Concept Programming 78
descubes writes "A recent article asked about improvements in programming. Concept programming is a very simple idea to improve programming: program code should reflect application-domain concepts. What is amazing is not the idea itself, but how often we don't apply it, and how much existing tools and techniques can get in the way without us even realizing it. To be able to represent all concepts equally well, we need tools that don't force a particular, restricted vocabulary on us. The Mozart project is a Free Software project to implement concept-programming development tools. It includes the Coda universal intermediate language, the Melody persistent program representation, the Moka Java-to-Java extensible compiler, and a fairly advanced front-end for a new programming language called XL. In the long run, Mozart can give the Free Software community a foundation for something as powerful as Charles Simonyi's Intentional Programming."
Good idea - no need for new tool gimmickry (Score:2, Insightful)
I write lisp macros, essentially extending the language and customising the object system to incorporate the domain-specific concepts, rolled into a package.
It's the way many forth coders work too, BTW.
I find it interesting that lisp, coming from the really high-end and forth, coming from the really low end, feel so similar in this respect.
Re:Good idea - no need for new tool gimmickry (Score:3, Informative)
- A way to adapt the syntax. In Lisp, you write (+ 1 2), not 1 + 2. So if you have the semantic ability to represent concepts, you don't have the syntactic ability.
- A distinction between the program environment and the meta-program environment. When you create a lambda, its "namespace" is the current program. In Mozart, it need not be.
Re:Good idea - no need for new tool gimmickry (Score:3, Insightful)
Re:Good idea - no need for new tool gimmickry (Score:2)
Lisp might have the technical capabilities to do concept programming. But based on my limited experience, concept programming is not in the Lisp mind set any more than for other languages. Rather than asking themselves how code could represent user concepts, Lisp programmers spend their time finding ways to turn application concepts into lists or functions. Just like C programmers spend their time turning concepts into pointers and low-level malloc(). This is backwards in both cases.
The core of my (+ 1 2) argument is that this form is the natural representation of the concept in Lisp. The Lisp users consciously choose mathematical purity over diversity. Saying "everything is a function" or "everything is a list" is mathematically correct. But it is not true from a concept point of view. If you have any doubt, a comment from the package you referred to reads essentially: 1+2 is equivalent to (+ 1 2), but much easier to read according to some folks (emphasis mine). This comment was most certainly written because it was expected that there would be a negative reaction to trying to mess with the sacred notational purity of Lisp. The comment really reads like "Eh, I'm not one of them guys who don't know the one and only truth!"
Lisp is both more extensible than C, and much more capable of digesting new paradigms, like object-oriented programming. This is no coincidence. I see this as a proof by existence that concept programming can enhance the longevity and adaptability of code.
Re:Good idea - no need for new tool gimmickry (Score:2)
What I am still saying is that the fundamental idiom in Lisp remains lists and functions. Show me the code for your Prolog and music environments. If they don't ultimately generate lists and functions, you will have convinced me. If I can't take the "car" of one of your "Prolog" programs, or if it tells me "Can't do that, that's a Prolog program, not a list", then you will have made a giant step towards convincing me.
The discussions with Lispers have convinced me that I really need to have an "XL vs. Lisp" page somewhere, and I will do that. It's hard, because Lisp can do everything, just like assembly. It doesn't mean that it's the right way to do it, though. Don't make the mistake of believing that I didn't know Lisp when I began XL. I did, and Lisp was a major source of inspiration from day one. But I am convinced that it is possible to do better than Lisp. I'd like to invite you to discuss your Prolog implementation and Lisp vs. XL extension mechanisms on the XL mailing list.
Re:Good idea - no need for new tool gimmickry (Score:2)
There is a difference between lowering everything to Lisp code and lowering everything to machine code.
and vectors and hashtables and symbols and clos objects and numbers and all those other primitive lisp types you disregard
I disregard them in that thread because Lisp doesn't use them for code representation, it uses lists. I'm pretty sure that the packages we discussed generate lists, not any of these entities.
Uh... (Score:3, Informative)
Re:Uh... (Score:4, Informative)
In AOP (aspect-oriented programming) everything is a verb and a matter of flow. I.e. after doing one thing, you do another. Or if this action is taken, that one must be taken. Sorta trigger driven.
In concept programming, everything seems to be more verb/adjective-like. I.e. you wouldn't create a Max object or a Drive object or a Smell object as in OOP. You'd create things that have a max() method, or a drive() or smell() method. You'd create the concept of smell() and probably return something that describes the result of finding the max(), or driving() or smelling().
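The verb/noun contrast can be sketched in Python (both forms below are invented illustrations, not code from the article):

```python
# Noun style: the concept reified as an object you drive step by step.
class Maximum:
    def __init__(self):
        self.best = None

    def add(self, v):
        if self.best is None or self.best < v:
            self.best = v
        return self

    def result(self):
        return self.best


# Verb style: the concept is just something you *do* to values.
def maximum(*values):
    return max(values)


print(Maximum().add(3).add(7).add(5).result())  # noun style
print(maximum(3, 7, 5))                         # verb style, same concept
```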
Re:Uh... (Score:2)
How's this differ from polymorphism as implemented in languages with type inference?
Re:Uh... (Score:2)
It's not to say you can't do it with OOP or AOP; it's just the syntax that is different. Just like you can do procedural or AOP within OOP - typically you don't.
Re:Uh... (Score:2)
Now it's polymorphism and multiple dispatch. I recognize that every language is basically just syntactic sugar for either Turing machines or lambda calculus, but I'm just failing to see what's revolutionary here that isn't a well-trod concept with a shiny new label on it.
Maybe a shiny new label is what FP needs; who knows.
Re:Uh... (Score:2)
Know what I mean?
Re:Uh... (Score:3, Informative)
Re:Uh... (Score:2)
I think I'm going to have to read up on this as it sounds interesting.
Re:Uh... (Score:2)
For instance, the concept of adding A and B can be represented by:
A+B: a built-in operation, the preferred form for C
Add(A, B): a function call, the preferred form for C++ except for built-in types
(+ A B): a list representation that can be evaluated, the preferred form for Lisp
A.Add(B): a method invocation, the preferred form for Smalltalk (written A + B in Smalltalk)
All of these concept representations have subtly different semantics. What does that mean with respect to how well the concept is represented? This is a question that good programmers always ask themselves implicitly, and that concept programming makes explicit.
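As an illustration of how one concept can surface through several notations even within a single language, here is a Python sketch (the Money class is invented for the example):

```python
import operator


# Hypothetical Money type, used only to illustrate the notations.
class Money:
    def __init__(self, cents):
        self.cents = cents

    def __add__(self, other):
        return Money(self.cents + other.cents)


a, b = Money(150), Money(75)

r1 = a + b               # operator form, like A+B in C
r2 = a.__add__(b)        # method-invocation form, like A.Add(B)
r3 = operator.add(a, b)  # function-call form, like Add(A, B)

print(r1.cents, r2.cents, r3.cents)  # all three denote the same concept
```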
Re:Uh... (Score:1)
Slight nit, but this is the basic form in non-OO languages and braindead OO ones like Java. In C++ you just overload the + operator and write A + B like you would expect. At the compiled level, yeah, it's a function call (and you can write it as A.operator+(B) if you really want to).
To me, the concept of addition is best represented by A + B, mainly because that best reflects the way we inherently think of the operation.
Re:Uh... (Score:3, Informative)
True, but not every concept is easily represented as an object.
The 'Concept Programming vs. Objects' [sourceforge.net] page explains how concept programming relates to OOP.
Re:Uh... (Score:2)
There we have module-level or global functions, in order to represent something like "max".
I mean, it's trivial to find "things" which are _not_ well represented by "objects"; that's why pure object-oriented languages are not as omnipresent as not-so-pure ones. Every mathematical function comes to mind, etc. Do we really need a new paradigm for that - and if yes, why don't we just call it "not-so-pure-OO"?
Re:Uh... (Score:2)
Today, we do this implicitly, a bit like experienced programmers used function pointers in C to do some unnamed sort of OO.
What if you make it explicit? Then you realize that any tool or language has built-in limits, but that we learn how to work around them, and then make that part of our mental model. That is a Bad Thing.
The reason for the Maximum example is that I don't know of a good way to write it in Java. I know of good ways to write it in many other languages. But in Java, all approaches add a lot of useless noise and concepts just to bolt it onto one of the few Java paradigms. The same is true in C or C++ as well. That is undesirable.
The same limit exists for more complicated concepts in any non-extensible language. Lisp or Python are somewhat more extensible than C or Java, but you can still hit the ceiling pretty easily. In Lisp or Smalltalk, for instance, you hit it when doing math-intensive work, because you don't want to write (+ 1 2) when 1+2 is shorter and nicer.
Re:Uh... (Score:2)
I don't want to do language evangelism, but I'd be interested in what is wrong with python. I know it sounds silly to insist on that, but my next question would be why you don't just use python, because I find nothing on that site (didn't read everything, though) which python couldn't do.
And here [slashdot.org] you give an example about C++ which is a critique against the language, not the underlying "paradigm", AFAIK.
Re:Uh... (Score:2)
Nothing is 'wrong' with Python (or C or Java for that matter). Actually, IMHO, it is a very good language (clean syntax, clean semantics). What is wrong is when you use it for something it was not designed for.
Here are a few extreme examples: would you use Python to describe a page (using it like HTML)? Or to do numeric-intensive computing? For real-time embedded applications? To facilitate your coding of a "memcpy()" implementation in a difficult assembly like Itanium? If not, why not? The reasons are probably not the same in each case. Note that for all these examples, I think that XL could perform very well, given the right "plug-ins".
To use a less extreme example from the web site: if a good fraction of your application domain was expressed using the derivative notation, how would you automate the job of turning these application derivatives into code in Python? I'm not saying that you can't, I'm really asking the question. My limited knowledge of Python doesn't indicate that it would do that very well, but I might be wrong. I'm always ready to steal ideas, you see.
Re:Uh... (Score:2)
Well, that's right with any tool
Note that for all these examples, I think that XL could perform very well, given the right "plug-ins".
Well, python is extremely friendly to other languages, esp C, C++, Java. See for instance Extending and Embedding the Python Interpreter [python.org] . So, IMO python lends itself very well to numeric computing (see also the numpy module etc.), as it's easy to first write the code and later implement critical sections in a compiled language.
For realtime embedded applications, see for instance this thread [google.com] on comp.lang.python. Btw. there are a lot of very friendly and competent people who are always interested in language wars^H^H^H^Hcomparisons. I bet if you asked your question about the derivative notation (which can't be translated literally to python, I'm sure), this might lead to interesting discussions.
Re:Uh... (Score:2)
It's not about language comparisons. You can apply concept programming principles in Python. If you decide to call a method "Draw" rather than "Vhhlqkwz", it is because you apply concept programming implicitly: "Draw" is a better program representation of the concept than "Vhhlqkwz", at least for English readers.
Re:Uh... (Score:2)
I read through that page but I fail to see how concept programming should be any better than OO programming. Sure, with C++ you cannot easily specify methods that take lists but it's not because of OO but because of C++ limitations. IMO, max() should be method of a list or a group and C++ can do that fine with templates. Does anybody grok the Zen of Concept Programming? Is it really any better than OO programming?
Re:Uh... (Score:2)
Are you suggesting that:
x = (new List<int>).Add(1).Add(2).Add(3).Add(4).max()
is a better representation of the concept in your code than:
x = max(1, 2, 3, 4)
?
Re:Uh... (Score:2)
x = (new List).Add(1).Add(2).Add(3).Add(4).max()
is a better representation of the concept in your code than:
x = max(1, 2, 3, 4)
If you have code like max(1, 2, 3, 4), a simple bugfix would be replacing it with a simple 4. Usually, if you have to get the max() of a group (unordered) or a list, you first generate the group or list somehow. The concept here is that the list has the required method for finding the maximum value, instead of some random procedure somewhere else.
So the code becomes:
/* thelist is of type list and is initialized somewhere */
m = thelist.max()
If you hardcode the list's contents you already know what the maximum value will be. No need to compute it at runtime.
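As a point of comparison (a Python aside, not a claim about Java), a built-in max can accept both shapes of the concept, so the code can follow whichever form the application has:

```python
values = [17, 3, 42, 8]

m1 = max(values)          # "maximum of a collection"
m2 = max(17, 3, 42, 8)    # "maximum of these named quantities"

print(m1, m2)             # same result either way
```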
Re:Uh... (Score:2)
Let me follow your suggestion for a moment. Consider now something like Max(BUFFER_SIZE, CACHE_SIZE), where BUFFER_SIZE and CACHE_SIZE are two implementation-defined constants.
- Why do you need a list?
- Why do you need a method call?
- Why do you need dynamic dispatch?
- Why would I need to box BUFFER_SIZE as an Integer?
- Why should anybody have to implement a "LessThan" method wrapper?
- Why should I do the computation at runtime, when, as you pointed out, the compiler (but not me) should trivially be able to compute it, for any given platform?
- Why should I change my code if CACHE_SIZE happens to be sampleSize, a variable?
- Why should I change the form of my code if I compare three entities and not two?
- Why should I change the form of my code if the compared entities are not integers but strings?
The answer to all these questions is "Because of Java", not "Because of the application concept". That is the problem.
And now, the more important, higher-order questions:
- Is this an unrealistic example, something that nobody would ever need? In other words, is your "Usually" justified?
- Do you really think that this problem occurs only with Max, or am I trying to illustrate some general class of problem? Since it is obviously the latter, why try to fix my Maximum implementation? Are you trying to avoid the real issue?
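For contrast, here is how the same calls look in a language whose max is not tied to one paradigm (the constant names are illustrative). This does not answer the Java questions; it just shows the form of the code staying constant:

```python
BUFFER_SIZE = 4096      # illustrative implementation-defined constants
CACHE_SIZE = 8192

m1 = max(BUFFER_SIZE, CACHE_SIZE)                # two constants: no list, no boxing
sample_size = 16384
m2 = max(BUFFER_SIZE, sample_size)               # a variable instead: same form
m3 = max(BUFFER_SIZE, CACHE_SIZE, sample_size)   # three entities: same form
m4 = max("buffer", "cache")                      # strings: same form again

print(m1, m2, m3, m4)
```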
Re:Uh... (Score:2)
Re:Uh... (Score:2)
One nit: the implementation probably allows comparison between objects A and B of different type, as long as you can compare them. So this is not exactly the same concept, but it's close enough.
On the use side, nothing to say. The notation is natural. It is also the same as the one I suggested
On the implementation side, I notice that you did not show the first line of the help, which states "built-in function". Why is it a built-in? Can't you write something like that in Python?
Another nit: the use of __cmp__ instead of < is not very convincing. The < concept is mapped in a relatively unnatural way.
A more serious issue: you made __cmp__ a member of the "my_obj" class. This is, I think, the most natural approach in Python. Back to application space, < is not defined based on the type of its left operand. Why does it matter? If you write A < B, you don't expect the left and right to behave differently with respect to types. In your approach, they do. If I write max(A, B), it might work if A has a __cmp__ even if B doesn't.
So the implementation is useful and practical, but concept programming still has some interesting things to teach us about it. All of these comments have interesting applications in terms of long-term maintainability. This is left as an exercise for the reader.
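The asymmetry described above can be demonstrated with a small Python sketch (the classes are invented for the example; `__lt__` plays the role `__cmp__` did):

```python
class Volume:                      # defines '<' on itself
    def __init__(self, v):
        self.v = v

    def __lt__(self, other):
        return self.v < other.v


class Plain:                       # defines no comparison at all
    def __init__(self, v):
        self.v = v


def max2(a, b):                    # dispatches '<' via the left operand first
    return b if a < b else a


print(max2(Volume(3), Plain(5)).v)   # works: Volume supplies '<'
try:
    max2(Plain(3), Volume(5))        # same concept, swapped order: fails
except TypeError:
    print("TypeError: the operand order matters")
```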
Re:Uh... (Score:2)
built-in in Python doesn't mean that you can't implement it in Python, it just means that you don't need to import a module to use it. It would be trivial to re-implement it in Python, I just didn't do that.
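For the record, a minimal pure-Python re-implementation might look like this (a sketch covering the built-in's two calling conventions; the real built-in also handles things like empty sequences with a proper ValueError):

```python
def my_max(*args):
    # max(iterable) or max(a, b, ...) - mirror the built-in's two forms
    items = args[0] if len(args) == 1 else args
    it = iter(items)
    best = next(it)                # assumes at least one value
    for item in it:
        if best < item:
            best = item
    return best


print(my_max([2, 9, 4]), my_max(2, 9, 4), my_max("b", "c", "a"))
```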
Another nit: the use of __cmp__ instead of < is not very convincing. The < concept is mapped in a relatively unnatural way.
I can't follow here; maybe you misunderstood me, maybe I don't understand you. I don't use __cmp__ instead of <; at most I use cmp instead of <. But in Python I could have also written <.
To show a more esoteric example: the brackets around the ints are there to signal that the 4 is to be used as an instance of its type (class), so we can get at its methods (don't ask me why; this would not be needed with strings etc.). Note that since version 2.2, there are more special comparison functions to distinguish every possible comparison (==, !=, <, > and so on), and that if the left type doesn't have the right method, the right side might have the reflected version (i.e. > instead of <), so this is looked up on the right side, and used if found.
And yes, you can destroy the mathematical meaning of these operators with this power, but that's life, as long as you don't deliver a __cmp__ function which fdisks the hard disk as a side effect
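The reflected lookup mentioned above can be seen in a few lines of Python (the Meters class is invented for the example):

```python
class Meters:
    def __init__(self, m):
        self.m = m

    def __gt__(self, other):       # only '>' is defined on this type
        return self.m > other


print(Meters(7) > 5)    # direct lookup: Meters.__gt__(5)
print(5 < Meters(7))    # int.__lt__ returns NotImplemented, so Python
                        # falls back to the reflected Meters.__gt__ on the right
```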
Nice (Score:1, Funny)
Sounds like... (Score:2)
Not very well-explained nor convincing (Score:4, Insightful)
The first example discusses the concept of "Maximum", and shows how you would implement that concept in Java, followed by the allegedly superior XL way to do it. The Java "class" makes no sense, and really would not be the way to go about it. You would never want to model the concept of Maximum in that way, but if you did, you would use the already-existing Comparable interface and create a static method called "Max" on some class that takes a list of comparable objects.
Furthermore, in C, you can model it exactly as they have, since C allows multiple arguments.
The next example discusses taking a derivative and how you can translate some incorrect Java syntax that takes a derivative into the Java equivalent. Why not write a method to do this? What is to be gained by using a non-standard syntax? It makes it harder to write (you have to learn something in addition to Java), and harder to read (for the same reason).
As for the XL language, and the notion of Concept Programming, it just wasn't explained well at all, and left me saying "what's the big deal? What does this buy me? Where is a real example?" Not every program (dare I say not many programs) is based around mathematical equations and operations. Most involve executing some logic based on input and spitting out output. Modelling that as math seems really counterintuitive (and not in line with the "concept" of your domain).
Ultimately, seems like some typical academic wank-fest that someone can use to get their Ph.D., but not very applicable in the real world.
Re:Not very well-explained nor convincing (Score:1)
I agree that the Java implementation was idiotic at best. One would think that the author was somehow convinced that "maximum", being a noun, would therefore be implemented as its own object in an OO design. Of course, competent OO programmers do not merely select random nouns from sentences to formulate their design, because doing so leads to backwards implementations, as the author has demonstrated. Unfortunately, because the implementation chosen in the Java example was so unrealistic, and the implementation differences are the basis for the comparison, the entire comparison between Concept Programming and OO Programming is worthless.
I find it interesting that the "conceptual" implementation looks suspiciously similar to C++'s std::max_element(). In fact, it almost seems like the comparison is really between Generic Programming and OO Programming. I think that page needs to contain better examples to illustrate its point.
Re:Not very well-explained nor convincing (Score:2)
Several comments were along these lines. Now, a challenge for all you Java experts who question my sanity: think of the best way to implement "Maximum" in Java.
Once you have thought about it, ask yourself:
- How large is it? What fraction of the code is useful code, what fraction is syntactic or semantic glue?
- How difficult is it to use? How much code for the users is irrelevant glue?
- How restricted is it? Does it work for all types? Does it work for "int", or do I need additional "helpers", like boxing/unboxing? Can I add arbitrary classes? What does it mean for my code if I need more than one "less-than"?
- How efficient is it? Did I just create 5 objects and two lists, and invoke seven method calls, just to compare two numbers?
- How easy is it to document? Does the structure make sense, compared to the original "Max" idea? Could you explain how it works to your grandmother?
Here are the approaches that I know about:
- Having static functions like Max(int, int). This works for a limited number of arguments and a limited number of types. So it soon becomes very verbose. It is still the best method in many cases.
- Having an interface like Comparable with a method like "Less". This means that I need all sorts of ugly boxing, it doesn't work with the natural less-than operator, and it is a bit inflexible, for instance if you want more than one meaning for "less-than".
- Passing lists and comparators around. It is very inefficient, and the caller has to use an unnatural interface.
- A few less interesting solutions, like passing a string, or having a Max class where you accumulate values using one method and produce the result using another.
Please post about any other idea you might have.
Re:Not very well-explained nor convincing (Score:1)
The comparator/container approach is probably the best way to implement this "concept". I agree that the Java solution is more verbose than the XL solution, but I am not convinced that this matters much. Java tends to be verbose, but it is still a very easy and readable language.
"Concept Programming", as I understand it, mainly focuses on extensibility. The web site shows a derivation example, where the language has essentially been extended, allowing it to "properly reflect the concept" of derivation in the source code. While the concept of an extensible language is somewhat interesting, I don't think that altering the fundamentals of a language to suit the current problem is a great idea. Doing so makes the common programmer a language designer, and raises issues concerning complexity, maintenance, and readability. Elegance is nice, but it doesn't necessarily help programmers do their job any better.
Re:Not very well-explained nor convincing (Score:2)
Concept programming doesn't focus on extensibility. Extensibility is a consequence, not a goal.
Re:Not very well-explained nor convincing (Score:1)
Verbosity is not an effective measure of complexity. *cough*Perl*cough*
I am glad you responded and updated your web site, because I feel like I have a better understanding of "Concept Programming". I still don't understand exactly what methodologies are considered "concept-oriented". I understand that, according to "Concept Programming", program code should represent application concepts, but what programming techniques are utilized to achieve that goal? Is it nothing more than a goal? For "Concept Programming", it almost seems like a programmer would have to use a language that supports every possible programming construct with a syntax that is simple and intuitive to the domain. To do this, one would probably require an extensible language. It sounds like extensibility truly is a goal, or at least a requirement to reach your goal.
Re:Not very well-explained nor convincing (Score:2)
Because taking a derivative is not a function. It's a metafunction; it is applied to a function and returns another function. For example: the derivative of sin(x) is cos(x), for all x. This can't be implemented as a method and so using method syntax would be nonsensical.
I agree that I just didn't get the point of the Minimum example; but the meko thin-tool application (which is used for the above transformation) looked really cool. Program transformation is a hot topic in the functional programming world; I'd like to see what it can do in the procedural world.
Oh, yeah, and all programs can be modelled as a function; all Turing-complete languages are equivalent, so a program can be transformed from one language to any other, including the purely functional languages. This is the entire basis behind a lot of computational theory, and so their mathematical representation makes a lot of sense.
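The metafunction shape is easy to see with a numeric sketch in Python (a central-difference approximation, not the symbolic transformation discussed on the XL site):

```python
import math


def derive(f, h=1e-6):
    """Take a function, return (an approximation of) its derivative."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)


cos_approx = derive(math.sin)          # derivative of sin is cos
print(abs(cos_approx(0.0) - math.cos(0.0)) < 1e-6)
```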
Re:Not very well-explained nor convincing (Score:2)
That's not exactly true. It just depends on how you model it. In any case, you either are creating a generalized derivation toolkit (which you could create in Java or any language), or you do some combination using lookup tables.
Methods in Java are objects, so you are free to pass them to other methods. Since everything is an object, including methods and classes and variables, you can pass a function to a function and have it return a function. This is also trivial in C.
The point is that the explanation doesn't really give much motivation for Concept Programming, other than as an academic exercise.
That transformation tool looks like a souped-up pre-processor (which C has had forever), and not only does it not seem terribly ground-breaking, but it seems like a generally bad idea if you are wanting to produce production-quality maintainable code, especially when the language in the example (Java) already has powerful facilities for you to model your solution without relying on some new syntax and translation step.
Re:Not very well-explained nor convincing (Score:2)
Oh, I'd like to see you try! Since this is so trivial, please give me a definition of a method or function "derive" in either C or Java so that I can write, for any function G, something like:
function F = derive(G);
In C++, you can do it for some special forms of G, using template meta-programming. People have written articles about it. It takes thousands of lines of VERY hairy C++ code. In Lisp, you can do it. In Java or C? I don't think so.
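For what it's worth, a dynamic language can get surprisingly far with a toy symbolic differentiator, though only over its own invented expression representation (nested tuples here), not over arbitrary functions G as the challenge demands. A sketch:

```python
def derive(e, var="x"):
    if isinstance(e, str):                 # a variable name
        return 1 if e == var else 0
    if isinstance(e, (int, float)):        # a constant
        return 0
    op, a, b = e                           # a binary expression (op, lhs, rhs)
    if op == "+":
        return ("+", derive(a, var), derive(b, var))
    if op == "*":                          # product rule
        return ("+", ("*", derive(a, var), b), ("*", a, derive(b, var)))
    raise ValueError("unknown operator: " + op)


print(derive(("*", "x", "x")))   # ('+', ('*', 1, 'x'), ('*', 'x', 1)), i.e. 2x
```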
Re:Not very well-explained nor convincing (Score:3, Interesting)
You missed the point of the examples. They are not about syntactic sugar in the XL or Java++ implementations, but rather about the semantic limitations that non-extensible methodologies or tools impose on us. You prove the point further with your examples, by sticking to reference models even if they don't work very well.
The first example discusses the concept of "Maximum", and shows how you would implement that concept in Java, followed by the allegedly superior XL way to do it. The Java "class" makes no sense, and really would not be the way to go about it. You would never want to model the concept of Maximum in that way, but if you did, you would use the already-existing Comparable interface and create a static method called "Max" on some class that takes a list of comparable objects.
In order to implement the concept, you have added a lot of noise in the implementation, which includes not only the Comparable interface, but the need to use methods rather than operators. Your method needs to take a List of objects, so you have added noise at the call site as well. None of this code is actually useful in representing the original concept, it is some artificial complexity to stick to the OO model at any cost.
Furthermore, in C, you can model it exactly as they have, since C allows multiple arguments.
But the C vararg interface is not object-oriented, so this doesn't invalidate the point. As an additional note, it is not type safe either, so you need to add more noise to pass the type information, again both in the implementation (ever looked at the code for printf?) and at the call sites (beauties like "a=%s b=%d").
The next example discusses taking a derivative and how you can translate some incorrect Java syntax that takes a derivative into the Java equivalent. Why not write a method to do this? What is to be gained by using a non-standard syntax? It makes it harder to write (you have to learn something in addition to Java), and harder to read (for the same reason).
As another poster noted, this is a meta-operation, which transforms a concept into code. You can't write a method that does that, unless your method is part of the compiler. Please show me how you would code this in Java.
Ultimately, seems like some typical academic wank-fest that someone can use to get their Ph.D., but not very applicable in the real world.
Isn't that exactly what they said about OO when it first came out? BTW, I don't have a PhD.
Is it useful? If you ever wrote a perl script that scans your source code, or ever used some preprocessor, or something like that, then you have hit the limits of your tools, and concept programming tools would have helped you. On the other hand, if you never did, then you are not a Real Programmer (TM)... I can literally list hundreds of examples of such additional "build tools" in common Open Source projects.
Re:Not very well-explained nor convincing (Score:2)
it's more than a wankfest. He has a startup based on what is, in the abstract, essentially the same basic idea.
But I must disagree about the deficiencies of Java. It's all about style. Admittedly, the lack of varargs-alike syntax in Java is a bit annoying, but it's trivial to work around:
ComparableSubclass object = ComparableSubclass.Maximum(makeList4(a, b, c, d));
Re:Not very well-explained nor convincing (Score:2)
And then, you totally miss the point. It's not about Java not having varargs, it's about the fact that not having varargs forces you to work around it. The resulting code no longer maps correctly to the application concept. It generally contains a lot of noise.
You only showed a fraction of the total noise with your example. makeList4 looks simple, but how many of these do you need? Do you need makeList27? For floating-point arguments? It gets very verbose by then...
And then, the only reason it is trivial to work around is that I chose a trivial example to make my point. Take something more complex than maximum, and you have real problems. Compare the Java and Objective-C versions of the MacOS X frameworks to see what I mean...
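The makeListN explosion disappears in any language with varargs; a Python sketch of the contrast (the names mirror the Java workaround above):

```python
def make_list4(a, b, c, d):      # the workaround: one helper per arity
    return [a, b, c, d]


def make_list(*items):           # varargs: one helper covers every arity
    return list(items)


print(make_list4(1, 2, 3, 4) == make_list(1, 2, 3, 4))
print(make_list(1.5, 2.5))       # no makeList2 needed
```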
Re:Not very well-explained nor convincing (Score:1)
Personally, I think of the "power" (ie. expressive power) of a language metaphorically as in physics: power = work / time. In this case, it's how much "work" a program can do divided by the amount of developer "time" it took to write the program.
I think it's likely that we are not even within an order of magnitude of the power that languages will eventually possess.
To me, syntactic sugar is an equivalent syntax for some construct that has no value but to save some finite number of keystrokes every time it is used. An example of this is infix operators in OO languages, where we write "3 + 4" rather than "3.plus(4)". (So, clearly, syntactic sugar can be important, since I don't think people would willingly use a language that mandated the latter notation.)
In contrast, working around the lack of varargs in Java the way your parent post described would require an infinite number of keystrokes every time you do it, because you need an infinite number of makeListXXX methods in order to be truly "equivalent". Therefore, varargs is not just syntactic sugar for the makeListXXX notation in my books; it is fundamentally more powerful. And, even if you could write enough methods to make them essentially equivalent, the time required to write all these methods makes Java, by definition, less powerful when it comes to varargs.
trying to keep an open mind... (Score:2, Insightful)
Projects fail not because of language defects, but because people insist on using the wrong tools. Projects succeed when people use the right tools, not because of the features of these tools.
I'm not against this point of view, because I have problems thinking for myself
Isn't this the idea behind a rich OS, like a *nix?
I have all these languages to choose from, that I can glue together many different ways... I don't have to nail myself to A solution, and thereby A drawback.
I can get the drawbacks of everything, all at once!
Maybe I'm sadistic, but I like having a lot of tools in my toolbox. I like that if I don't understand a concept in a language, I'm not shut out of the programming flock forever.
I knew I should have just posted something funny and hung up. Coffee...
Market Programming (Score:1, Funny)
It is designed to make it easier for people who don't have programming experience, but who do work in the Marketing Department, to develop applications independently of a technology or programming staff.
Market Programming requires a voice-recognition element, because people from the Marketing Department are (seemingly) much clearer when speaking; the meaning of their thoughts is completely lost when committed to the written word, where it could be systematically analyzed.
The process amounts to a Marketing Expert speaking into the microphone, or to an individual that the Marketing Expert will treat like a microphone. For example, he might say, "I need a robust, multi-tiered, fault-tolerant, enterprise-class, innovative, xml, j2ee, turn-key, hands-off solution ASAP."
At this point, the Market Process begins.
Step 1: BLORK!
Step 2: Marketing Expert double-clicks on the setup.exe icon to receive and implement the solution.
NOTE: Step 1 may take a while. Please be patient. The process will initially report the completion time to be 3 months, but it may eventually take 9 months to complete.
Prove It! (Score:2)
I'm not about to say this or any of the other ideas referenced are bad, at least not always. It just seems like an idea that needs more proof, evidence, or something beyond an opinion.
In physics, chemistry, and other hard sciences, ideas are submitted to far more scrutiny than many of today's ideas. We can come much closer to proving the ideal gas law than theorists have bothered to do with design patterns, OO, concepts, extreme programming, etc. Too often in computing today, an idea is "proven" when some analyst, poohbah or Slashdot has referenced it.
Since there really isn't any hard math or evidence, the idea is much closer to opinion than fact, and as such won't last very long.
I'd submit that the best approach to programming is to hire good people, and keep them once they learn the product. Give them a real budget with realistic deadlines. Listen to the developers. Think twice before shoving the methodology du jour onto the development team. A good team probably can function more efficiently and effectively without unsubstantiated opinions. When something is *really* substantiated, then developers should and would listen.
*End Rant*
Re:Prove It! (Score:2)
Two hard data points:
- By replacing a "wrong" concept (pointers) with a "better" concept (input/output flags on arguments) for parameter passing, XL shows a 70% improvement over C++ for parameter passing: it avoids the loads and stores and does everything in registers. This particular fact had been documented previously for languages such as Ada.
- By replacing a "wrong" concept (hierarchy of classes with left-shift operator) with a "better" concept (type-safe, variable argument lists), XL shows a 7x improvement over C++ for text I/O.
Your turn: prove that having the freedom to select the right tool for the job is a bad thing.
Re:Prove It! (Score:1)
70% improvement in what? Kittens petted per minute? And is that an increase or a decrease - it rather depends on how you feel about kittens after all.
A 7x improvement over C++ for text I/O? Again, what are you measuring? Time spent correcting programmer errors? Time spent in meetings deciding a standard syntax for text i/o functions? Time spent by the compiler building the machine code?
What?
Re:Prove It! (Score:2)
Re:Prove It! (Score:1)
Still, you should be aware of the famous
Impressive (Score:2)
Mozart is a poor choice of name, unless it is very very very old - because it is already taken by a programming system (see the Oz language [mozart-oz.org]).
Also, check out Pliant [pliant.cx] which is (relatively) mature, and does most of what is discussed on the site and a whole lot more.
Silver Bullet (Score:2)
Don't you love simple, two-word statements that the majority of your audience understands to mean "someone's become fixated on something that will magically 'fix everything'"?
I could make exactly the same type of post about any technology or approach.
When trying to make people aware of an approach that they do not know, do *not* use words such as "amazing". They simply obscure any real information, and turn off precisely those people who it may benefit.
If I were to run your post through spambayes [sourceforge.net], I suspect it would classify your post as spam on my machine, although it may have enough useful markers to counteract the obvious spam markers.
Shark Sandwich (Score:3, Insightful)
From the REALLY LIMITED amount of information about "concept programming" on the linked site, it appears that the author REALLY REALLY wants to use higher-order functions (a la Scheme or Haskell), but he just doesn't know it.
Re:Shark Sandwich (Score:2)
From the web site:
Re:Shark Sandwich (Score:2)
Notice that there is nothing in this description that says anything about what concept programming is. It just vaguely asserts that abstraction is good, which is hardly news.
Re:Shark Sandwich (Score:2)
So that is obvious? Well, let's see Hello World in C++:
#include <iostream>
using namespace std;
int main() {
    cout << "Hello World";
}
And now let's see Hello World in Basic:
PRINT "Hello World"
Which one is closer to the "application concept" in that case?
Hence Observation #1: Concept Programming sounds easy and obvious, but we just don't do it.
Now, I can write a "print" in C++ and make it look almost like the BASIC one. So no big deal, right? Well, I can certainly do that for simple stuff, but not for complicated things. You cannot write in C or C++ something that is functionally equivalent to WriteLn in Pascal. In XL you can. You can't write a tasking package that gives you the kind of high-level constructs found in Ada. In XL you can. You cannot have the C preprocessor or compiler compute a symbolic derivative for you. In XL, you can.
So that is Observation #2: For a given tool or language, there are concepts that I can write, others that I can't. They depend on the tool you select.
This brings me to Question #1: Can we write programming tools where you can talk about any concept?
And this is Proof By Implementation #1: XL is a language that is more extensible than even Lisp. I don't know if I can represent any concept, but I can represent, using a single language, models that up to now were natural only in a few "specialized" languages like Ada, Prolog or Lisp.
Oh, but this is adding abstraction, so it must be slow, right? Well, no.
Observation #3: The choice of concepts defines the performance you get.
Observation #4: Higher concepts sometimes enable optimizations that would be impossible without the compiler knowing what it's talking about.
Proof by Implementation #2: By applying these techniques, XL has already demonstrated, on a few "toy" but common examples, significant performance improvements over languages generally considered "fast", like C or C++.
Is that enough to get you interested again?
XL vs. Concept Programming (Score:2)
If concept programming really means anything, then you need to sit down and write up a detailed and exact description of what this is. If your description takes less than 5000 words, go back and see what you left out. Most importantly, don't refer to XL or any examples written in XL, as these will just confuse the issue at this point.
After you have defined concept programming clearly and exactly, then you should write a paper on XL as an example of a language designed for concept programming. Be sure to explain exactly what you mean when you say that XL is "more extensible than Lisp". You should also compare and contrast XL with languages such as Scheme, Haskell, Dylan, and Smalltalk. Do NOT mention performance or optimization in this paper AT ALL. XL is a language, and languages do not have performance characteristics. Performance is a property of an implementation.
Finally, when you've defined all this stuff, you can talk about the performance of code generated by the current implementation of your compiler. This document is the least interesting because, if you are working diligently, the performance of the next version of your compiler should be better, and so on. So this document will change rapidly, while the first two should change either very slowly or not at all.
Sorry to be pedantic. I'm trying (now) to help you explain (to me and everybody else) why concept programming is something we should be interested in.
Re:XL vs. Concept Programming (Score:2)
What is the part you don't understand in "Concept programming is the idea that application concepts should be reflected in program code"? That's much less than 5000 words.
Based on the information on your site, concept programming could be anything at all.
This was the "Is it trivial?" objection in my previous reply. It sounds trivial, and we think we do it. In practice, we don't; that's my Observation #1 above. Anybody can, by looking at the C++ vs. BASIC code and applying the one-line definition of concept programming, identify that C++ has a worse "mapping" of the concept than BASIC for that example. You want a Lisp or Haskell or functional-languages example? What about the concept of "addition"? Do you think that writing (+ 1 2) is the correct way to represent the concept that everybody on the planet writes as 1+2?
Why can't I just "forget XL for the moment"? Because for some simple examples, XL is the only language where I can express the concept at all. Maximum is a good example. I know of no language which lets you define it "perfectly". Give me any language (and I know a few at least superficially) and write Maximum in that language, and I can probably point out why it doesn't behave enough like the mathematician's concept called "Maximum".
After you have defined concept programming clearly and exactly, then you should write a paper on XL as an example of a language designed for concept programming. Be sure to explain exactly what you mean when you say that XL is "more extensible than Lisp".
I gave a clear and exact definition of what concept programming is. I wrote a web site and 50000 lines of code instead of a paper, precisely showing XL as an example of a language designed for concept programming, as you suggested. As for "more extensible than Lisp", I think you could figure it out if you actually spent the time to read and understand the web site. Ask yourself: could Lisp really cover real-time programming or numeric computing? And if not, why not? I know the answers, and I have written them on the web site (or at least I tried to...)
Do NOT mention performance or optimization in this paper AT ALL. XL is a language, and languages do not have performance characteristics. Performance is a property of an implementation.
Based on several years of experience as a compiler developer, I can tell you that you are very wrong. Languages have performance characteristics. One contribution of my XL performance benchmarks is to illustrate why: because a wrong concept can prevent the compiler from doing the right thing. In Java, one "wrong concept" is the stack-based GC JVM design (which can be contrasted to the Taos/Elate virtual machine design, which did not have similar performance drawbacks, at least theoretically). In C and C++, it is the over-use of pointers which causes something called "aliasing". Lisp, Haskell, Scheme, Dylan and Smalltalk share a single performance-impacting "wrong concept", namely their reliance on a single data structure (or a few of them).
Sorry to be pedantic.
You sound more like someone who has been brainwashed by the Lisp school
And to validate my claim that if you had explored the web site, you could have found answers, I quote this page [sourceforge.net]:
Re:XL vs. Concept Programming (Score:2)
Re:XL vs. Concept Programming (Score:2)
The output is:
Yeah! One out of 4! No compiler warning!
Bzzzt, you lose
If I were nasty, I could also insert a few snide questions here about the deep application concept represented by the subtle difference between $_ and @_ in your code...
Re:XL vs. Concept Programming (Score:2)
Incidentally, the difference between $_ and @_ is a matter of syntax, and every programming language, even your beloved XL, has syntax that has to be learned.
Re:XL vs. Concept Programming (Score:2)
Good point. The XL Maximum example [sourceforge.net] makes this requirement explicit and verifiable by the compiler. The Perl code you submitted doesn't.
I was using the partial ordering function <, which in Perl orders numbers numerically and orders strings as being equal with 0.
So you are saying "the problem is not in my code, but in the Perl < concept, which doesn't match the mathematician's <." Why did you decide to use Perl's < then? By the way, your definition of Perl's < is somewhat incomplete: you could have "use overload" or other operator redefinitions too.
If you have some other partial ordering function you'd like to define, I can easily replace the calls to < with calls to some user-defined less_than function that does whatever you want, and the code will always return a maximal element (x such that for all y, it is not the case that less_than(y, x)).
I don't need a somewhat pedantic definition of what a partial ordering is
Of course, you'd rather just say I'm wrong than properly define Maximum or notions of well-defined orderings, so just go ahead and call me wrong because my code didn't do whatever it was you had in mind.
I gave a relatively simple and precise challenge: write a Maximum that resembles a mathematician's understanding of Maximum, using any language. Better yet, I also gave you an example written in XL as a reference. It specifies very precisely "what I have in mind", which includes comparing elements of the same type, in any number, using only <, and rejecting any type for which < doesn't return a boolean.
You proposed some Perl code in response. It is my duty to simply point out that your code fails to behave like the mathematician would expect it to. Not because "Of course, I want to prove you wrong", but because it accepts nonsensical input (from the mathematician's point of view), such as Maximum(1, "abc").
Incidentally, the difference between $_ and @_ is a matter of syntax, and every programming language, even your beloved XL, has syntax that has to be learned.
But the syntax Perl imposes is not exactly close to what the mathematician would expect. Perl mandates the correct understanding of the difference between $_ and @_, which is totally irrelevant to the mathematician. On the other hand, XL tries very hard to ensure that any element of the syntax, if arbitrary, is also necessary. Try to look at the example [sourceforge.net] and find something that you can remove without changing the definition in a way that matters from the mathematician's point of view... I think the only irrelevant item is "C", the name of the boolean variable used for validation of "ordered". All other symbols might be replaced with some equivalent symbol, but not removed.
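For readers without an XL compiler handy, the kind of checking the XL Maximum example enforces at compile time can be approximated at runtime. A rough Python sketch of the constraints described above (my illustration of the idea, not XL's actual semantics; the maximum name and its checks are invented here):

```python
def maximum(first, *rest):
    """Return the largest argument.

    Runtime approximation of the XL 'Maximum' constraints: elements
    of the same type, in any number, compared only with <, rejecting
    nonsensical mixed-type calls such as maximum(1, "abc").
    """
    for item in rest:
        if type(item) is not type(first):
            raise TypeError(
                "maximum: mixed argument types %s and %s"
                % (type(first).__name__, type(item).__name__))
    result = first
    for item in rest:
        if result < item:   # the only comparison used is <
            result = item
    return result

print(maximum(1, 7, 3))        # 7
print(maximum("a", "c", "b"))  # c
# maximum(1, "abc") raises TypeError instead of silently "succeeding"
```

The difference from the Perl version is exactly the point at issue: the bad call is rejected, though here only when the program runs, whereas the XL version rejects it before the program exists.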
Re:XL vs. Concept Programming (Score:2)
> less_than stuff if all I want is to compare two strings?
Because it's an application domain concept?
Perl is an extensible language. When you need a concept that's not defined in the base language, you write an extension.
Re:XL vs. Concept Programming (Score:2)
Ahh, but wait, how does XL know what to do with the < operator? Maybe it works built-in for strings and numbers, but what if I wanted to compare two arbitrary objects? I suppose I'd somehow have to make sure that my objects match the generic type ordered of your max/min code. How would I do that if not by writing a less_than function or overloading the < operator?
By the way, I don't understand the generic type ordered in your code at all. What I keep reading is "Something (what? two objects?) is of the generic type ordered if you have two ordered objects and can find out which of them is smaller," but this obviously doesn't make sense.
The most readable way of writing down the concept "ordered" in a somewhat formalized way, somewhat similar to XL's syntax, would imho be something like: two objects A and B are ordered if the < operator is defined for these two objects. But that would be just like having an interface "comparable" in an object-oriented language.
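That structural reading ("A and B are ordered if < is defined for them and yields a boolean") can be tested at runtime in a dynamic language. A small Python sketch, purely illustrative (the ordered function name is invented; XL performs the equivalent check statically, and an OO interface would declare it nominally):

```python
def ordered(a, b):
    """A runtime reading of the proposed concept: a and b are
    'ordered' if a < b is defined for them and yields a boolean."""
    try:
        return isinstance(a < b, bool)
    except TypeError:
        # No meaningful < between these types: not ordered.
        return False

print(ordered(3, 5))        # True: < is defined on ints
print(ordered("a", "b"))    # True: and on strings
print(ordered(1, "abc"))    # False: mixed int/str comparison is undefined
```

The isinstance check also captures the "rejecting any type for which < doesn't return a boolean" clause from the XL example: a type whose < yields something other than a boolean would fail the test.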
Finally, let me say that I really do like some of the ideas in XL. I will certainly take a closer look at it when I have time. One thing I think you should consider changing is the name "concept programming." Look at all the confusion it generated in this /. discussion :-). What you really mean is simply "expressive programming," i.e. the goal of your project is to enable developers to write more expressive code, as opposed to e.g. forcing them to make everything an object and to sometimes write very verbose code like in Java-the-language. Of course, "expressive programming" is not a new idea, so you'd lose your new-buzzword marketing advantage ;-)
Re:XL vs. Concept Programming (Score:2)
How does XL know what to do with the < operator? There are operator definitions. For instance, less-than for integers is declared as:
function CmpLT(integer A, B) return boolean
written A < B
I invite you to look at the real thing in the CVS tree. A pragma connects this declaration to the implementation, which is in this case a built-in operation of the compiler back-end. The same mechanism, but generally without the pragma, is used for programmer-defined types.
The way I keep reading it is... but it obviously doesn't make sense
It turns out that you read it right. You are telling the compiler what an ordered entity is. Why doesn't this make sense? To implement "Maximum", this is exactly what the compiler needs. It allows the compiler to reject incorrect uses of Maximum, for instance. If you ever passed a wrong argument to a C++ template, you will understand why this is useful.
What you really mean is simply "expressive programming" [... which] is not a new idea
"Expressive programming" doesn't carry the important notion that your reference is the application domain. A pointer in C is expressive, but I still have problems with a pointer used where the application concept is not that of "reference"... for instance to return values from a function. This problem comes from the concept mismatch, not from whether the code is expressive or not.
Also, I'm not trying to do something new, but to do something useful. Some people are doing concept programming today; some are using the terms in a different sense. What matters is that there is a lot of code written today where "my" concept programming shows problems. As long as such code exists, I'll keep trying.
Re:XL vs. Concept Programming (Score:2)
> then in C++ and Fortran, you don't have to...
You don't have to in Lisp either, since it has already been written.
In fact, the only reason you don't have to write one in C++ and Fortran is that it has already been written. It happens to be part of the compiler front-end, but that's irrelevant to the point that someone had to write it.
> Using Lisp for such projects is, in most cases, bigotry.
It's using a general tool for a specific job. The fact that it is such a generally applicable tool is what makes it so valuable.
Re:XL vs. Concept Programming (Score:2)
eval("(max 1 2 3 4)");
If there is something you don't like in this picture, you might start to "feel" concept programming.
Re:XL vs. Concept Programming (Score:2)
> and you'll end up with a lot of useless effort
Not really. You end up with a lot less effort than using Prolog, because you get all of Prolog's functionality, plus direct access to a more general underlying framework. See, for example, Schelog [neu.edu] or Poplog [poplog.org].
Hey, why reinvent the wheel? A dog might not be able to walk past a tree without pissing on it, but I would hope that a software developer could do better than that... Call me a cock-eyed optimist.
Re:XL vs. Concept Programming (Score:2)
And it's a meaningless statement that tells me nothing. Most modern middle-level and high-level programming languages make it possible to reflect application concepts in program code. Even an old despised language like COBOL (which was designed from the ground up to be readable by PHBs) satisfies your definition of "concept programming". And it's more difficult to pull off, but even C code can have this property.
I think what you're maybe trying to say is that program code should only reflect application concepts and nothing else (such as the other stuff that creeps in regarding types, memory allocations, etc.) This is probably the biggest part of a programming language's "personality"- whether it is low-level and best suited to working in the machine domain, or high-level and good for working within an abstract domain. No language seems to do both well, although most at least try.
But think about what you're doing. You're telling a computer how an application should behave in order to interact with some problem domain that is completely foreign to it. Doesn't it make sense to use a language that lets you specify behaviors in both domains? It's nice to do programming entirely in terms of the application's own semantics, but at some point somebody or something has to generate the machine instructions. If I'm using a language that wants me to forget the underlying machine details, I might write some pretty weird programs. At the very least, I'm putting a lot of trust in the behavior of the compiler.
Garbage in... (Score:2)
The trouble with all this conceptual stuff is that unless you can get a decent spec, or, at the absolute minimum, useful feedback from the end user, all you end up with is a conceptually elegant and/or implementationally efficient useless program rather than an inelegant and/or inefficient useless program.
In my experience, being tied into the wrong programming metaphor, or the wrong object set, or whatever, is far worse than spaghetti code. At least with spaghetti, all options are equally possible and equally bad, or good. Once you start having to fight your metaphor or object model, you end up with kludges that make Fortran look like a really beautiful thing.
And, again in my experience, the only way to get a decent spec is to write the code at least once, wave it at the user, list all the wrong assumptions, throw all the code away and start again. Which is not something coders like very much. But wasn't it that guru guy from Xerox (Kay?) who said that the best thing for software integrity was frequent fires in the backup store?
Web site update (Score:2)
Puzzled... (Score:2)
Re:Puzzled... (Score:2)
If the main concept of your domain is based on what Lisp-ers call M-exprs (infix notation for math), you should not be forced to use S-exprs (the prefix notation). In the case of Lisp, you can add a macro to use infix notation, so it is more of a strong encouragement: "we know what is best for you, my friend". But the default syntax for that particular application is close to inane. The default semantics is also inappropriate for that domain, by the way; see http://www.ai.mit.edu/docs/articles/good-news/sub
Another example: Perl has special notation and semantics for parsing regexps. If you want to parse text, Perl will be better than Lisp: concepts are represented in a more concise way, with tons of implicit semantics like $_ which make a lot of sense in that context. Yet to do general purpose programming, the syntax of Perl is inane at best.