Lightweight Languages 189
Denise writes: "'What happens if you get a bunch of academic computer scientists and implementors of languages such as Perl, Python, Smalltalk and Curl, and lock them into a room for a day? Bringing together the academic and commercial sides of language design and implementation was the interesting premise behind last weekend's Lightweight Languages Workshop, LL1, at the AI Lab at MIT.' Simon Cozens' report on perl.com says it wasn't the flame fest you might have imagined."
Ruby (Score:2)
Re:Ruby (Score:1)
Re:Ruby (Score:2, Insightful)
Re:Ruby (Score:2, Informative)
Ruby rocks! (Score:2)
Lua (Score:2, Interesting)
XML and Lisp. (Score:3, Interesting)
If you have to work with XML, and you know some Scheme, I recommend translating it into Scheme form, via ssax [sourceforge.net]. It makes XML not quite such a pain in the arse.
Re:XML and Lisp. (Score:2)
This is not a troll: what's the equivalent in Lisp to XML namespaces, or attributes, or DTDs?
Re:XML and Lisp. (Score:2)
Here's an XML file:
<Forecasts TStamp='958082142'>
<TAF TStamp='958066200' LatLon='36.583, -121.850'
BId='724915' SName='KMRY, MONTEREY PENINSULA'>
<VALID TRange='958068000, 958154400'>111730Z 111818</VALID>
<PERIOD TRange='958068000, 958078800'>
<PREVAILING>31010KT P6SM FEW030</PREVAILING></PERIOD>
<PERIOD TRange='958078800, 958104000' Title='FM2100'>
<PREVAILING>29016KT P6SM FEW040</PREVAILING></PERIOD>
<PERIOD TRange='958104000, 958154400' Title='FM0400'>
<PREVAILING>29010KT P6SM SCT200</PREVAILING>
<VAR Title='BECMG 0708' TRange='958114800, 958118400'>
VRB05KT</VAR>
</PERIOD></TAF>
</Forecasts>
and here's the equivalent in SXML:
(Forecasts (@ (TStamp "958082142"))
(TAF (@ (SName "KMRY, MONTEREY PENINSULA")(BId "724915")
(LatLon "36.583, -121.850")(TStamp "958066200"))
(VALID (@ (TRange "958068000, 958154400"))
"111730Z 111818")
(PERIOD (@ (TRange "958068000, 958078800"))
(PREVAILING "31010KT P6SM FEW030"))
(PERIOD (@ (Title "FM2100")
(TRange "958078800, 958104000"))
(PREVAILING "29016KT P6SM FEW040"))
(PERIOD (@ (Title "FM0400")
(TRange "958104000, 958154400"))
(PREVAILING "29010KT P6SM SCT200")
(VAR (@ (TRange "958114800, 958118400")
(Title "BECMG 0708"))
"VRB05KT"))))
Namespaces are dealt with as in
(c:part (*NAMESPACES* (c "http://www.cars.com/xml")))
Re:XML and Lisp. (Score:2)
OTOH, XML is being extended in all sorts of incompatible ways, so it may soon lose the advantages that it has held.
Is there a Docbook parser for a Scheme representation? Can it be used to, e.g., generate pdf, rtf, dvi, tex, etc. representations of the code? If so, then my objections are probably wrong. Otherwise, perhaps not. Perhaps if I wrote the Scheme representation, I would need to translate it into XML for usability. (Though in that case, I might prefer the older Docbook format.)
This is a serious question, as right now I'm having difficulty in getting Docbook working on a Win95 system, but I can easily get, e.g., MS Scheme (and others) working.
Re:XML and Lisp. (Score:5, Interesting)
((() ((hi) (there mr)) slashdot) guy () () ())
is one, for example. Lisp (and Scheme, a dialect of Lisp) both a) represents programs as a particular subset of s-expressions, which we'll call the set of s-programs (which makes sense, right? Every program is a piece of data, but not every piece of data is a program), and b) has facilities for very easily manipulating s-expressions -- they are Lisp's favorite datatype (the name LISP comes from "LISt Processor," in fact, and s-expressions are really just a way of writing down lists). The appeal of using Lisp dialects to process XML is based not on the fact that those programs are S-expressions, but that they process S-expressions easily -- and, as it turns out, XML expressions can be trivially converted to S-expressions -- let's say that there's a subset of s-expressions called X-expressions, and there's a trivial bidirectional mapping between X-expressions and XML expressions.
So, let's say you want to write a program that reads and/or writes XML to communicate with the world. You can just write your program as a normal old S-expression-manipulating program, like Lispers have been doing since 1958, and then right where you read in the data, you call some function that reads an XML datum instead, and right where you write it out, you call some function that writes an XML expression. Now you can still use all the XML-processing gizmos you already have, but you can also write your own XML-processing gizmos really easily. In fact, I've been involved for some time in a Web programming language project, and that's how we manipulate XHTML: we read in XHTML expressions, manipulate them with Scheme code that's easy to write because the XHTML-to-S-expression mapping is so thin, and then write out XHTML expressions to the web browser. None of the other XML-based tools in the chain (the web browser, XML editors you use to generate the web page templates, et cetera) need to know or care about the fact that my implementation is in Scheme.
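The thin XML-to-s-expression mapping described above can be sketched in a few lines. Here is an illustrative Python version (the `(tag, attrs, children)` tuple shape is my own choice, not SXML proper, and mixed content and tails are ignored for brevity):

```python
# Minimal sketch of the XML <-> s-expression mapping, using Python's
# standard xml.etree. The tuple shape (tag, attrs, children) is an
# illustrative stand-in for SXML.
import xml.etree.ElementTree as ET

def xml_to_sexp(elem):
    """Turn an Element into a nested (tag, attrs, children) tuple."""
    children = [xml_to_sexp(c) for c in elem]
    if elem.text and elem.text.strip():
        children.insert(0, elem.text.strip())
    return (elem.tag, dict(elem.attrib), children)

def sexp_to_xml(sexp):
    """Inverse mapping: rebuild an Element from the tuple form."""
    tag, attrs, children = sexp
    elem = ET.Element(tag, attrs)
    for c in children:
        if isinstance(c, str):
            elem.text = c
        else:
            elem.append(sexp_to_xml(c))
    return elem

doc = ET.fromstring("<PERIOD TRange='958068000, 958078800'>"
                    "<PREVAILING>31010KT P6SM FEW030</PREVAILING></PERIOD>")
sexp = xml_to_sexp(doc)
# ('PERIOD', {'TRange': '958068000, 958078800'},
#  [('PREVAILING', {}, ['31010KT P6SM FEW030'])])
```

Once the data is in tuple form, every ordinary list-processing trick applies, which is the whole point of the Lisp approach.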
The only smugness you hear from the Lisp people (and this is where the faux comparisons between Lisp and XML come in) stems from the fact that Lispers have been storing their data the way XML does, only more cleanly and with less typing, for years. Now XML comes along and everybody thinks it's going to usher in world peace and change the way we put our pants on. Well, dammit, Lispers were already putting their pants on a different way, thank you very much!
Re:XML and Lisp. (Score:2)
In particular, I am contemplating ways of getting Docbook, etc. to process correctly as a component of other programs. Easy on Linux; not so easy on Win95.
And it was in this context that I ran across an argument claiming that Scheme expressions were better than XML.
I suppose the short form of my question would have been "Better for what?", but that seemed discourteous. And besides, if there were a way to do this in Scheme, I wanted to hear about it. So far, no answers.
Re:XML and Lisp. (Score:2)
Scheme is better for manipulation of XML.
Translating XML -> Scheme is easy.
Manipulating s-expressions in scheme is powerful.
Translating s-expressions back to XML is easy.
It won't help your problem, though.
jeff
Re:XML and Lisp. (Score:2)
I don't think you correctly understand what the OP was saying. What he said, translated into CS-ese, was "The set of XML expressions is isomorphic to the set of Lisp s-expressions," a true statement. What you heard was, "The set of XML expressions is isomorphic to the set of Scheme programs," which is clearly not at all true -- as you point out, XML (and s-expressions) can be used to describe any sort of data, while valid Scheme programs are much more constrained and can only describe a particular set of computations (though the cool thing about Lisp is that programs are themselves represented as a subset of the very same sort of data that Lisp programs are best at manipulating, which is why Lisp macros are so incredibly more powerful than C macros). However, there's a trivial map from XML expressions to S-expressions, and Lisp loves S-expressions more than any other kind of data, which means that Lisp loves XML expressions almost as much.
Re:XML and Lisp. (Score:2)
Or better:
Dammit, Jim, I'm a publisher, not a data-modeller.
Re:XML and Lisp. (Score:2)
Probably. Good SGML processing tools understand DSSSL, which is, of all things, Scheme. SGML used to just break down entirely to sexps, but that approach wasn't fast enough, given the state of the art of Scheme at the time.
Re:XML and Lisp. (Score:2)
This is something I want. Not some crusty "Bah, it's just sexps", but some working code.
slashdotted - heres the summary (Score:4, Redundant)
As I've indicated, the interest of the workshop was as much what was going on outside the talks as well; Dan and I got to meet a load of interesting and clever people, and it was challenging for us to discuss our ideas with them - especially since we didn't always see eye to eye with our academic counterparts. Sadly, few people seemed to have heard much about Ruby, something they will probably come to regret in time. Dan seemed to have picked up a few more interesting technical tips, such as a way to collect reference count loops without walking all of the objects in a heap. Oh, and we found that you should pour liquid nitrogen into containers first rather than trying to make ice cream by directly pouring it into a mix of milk and butter. And that the ice-cream so produced is exceptionally tasty.
But seriously, what did we learn? I think we learned that many problems that we're facing in terms of Perl implementation right now have already been thoroughly researched and dealt with as many as 30 years ago; but we also learned that if we want to get at this research, then we need to do a lot of digging. The academic community is good at solving tricky problems like threading, continuations, despatch and the like, but not very interested in working out all the implications. To bring an academic success to commercial fruition requires one, as Olin Shivers puts it, "to become Larry Wall for a year" - to take care of all the gritty implementation details, and that's not the sort of thing that gets a PhD.
So the onus is on us as serious language implementors to take the time to look into and understand the current state of the art in VM research to avoid re-inventing the wheel. Conferences such as LL1, and the mailing list that has been established as a result of it, are a useful way for us to find out what's going on and exchange experience with the academic community, and I look forward intently to the next one!
Being Larry Wall (Score:2)
First John Malkovich, then Andrew Plotkin [wurb.com], now this. Aren't things getting a little out of hand?
-- MarkusQ
Best language for...? (Score:2, Funny)
"My language tastes better!"
Re:Best language for...? (Score:2)
Pascal is a lightweight language, even with extensions. Take a look at pax [rave.org] -- that's very possibly the smallest non-obfuscated functional language out there. Take it to another level: Befunge. False. Brainf*ck (the smallest Turing machine implementation I've ever seen). OISC, living proof that Subtract and Branch If Negative is all you need.
Though to be honest with you, when I think lightweight... HTML is precisely one example. The Unix Shell(s), with the possible exceptions of bash and ksh93. JavaScript, if not in execution, is lightweight in concept; Java is not, though it was intended that way. Scheme is lightweight; Common Lisp is not. Snobol probably was. *roff is, PostScript is not. Forth is; var'aq [geocities.com] is not. Lightweight implies two things to me: small overhead and specialized (though not necessarily limited) functionality.
So yeah, I don't much like the name of this conference.
What's the "lightest" you can get? (Score:3, Funny)
To me, it would seem that the lightest I can come up with is:
So would that be usable? A simple program such as:
VAR A
VAR B
INPUT A
INPUT B
C=A+B
PRINT C
GOTO 3
Can we get even more lightweight?
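For the curious, the language sketched above is easy to give a throwaway interpreter. This is an illustrative Python sketch (the 1-based statement numbering for GOTO is my assumption from the `GOTO 3` above, and only `+` expressions are handled):

```python
# A throwaway interpreter for the tiny VAR/INPUT/PRINT/GOTO language,
# reading statements as numbered from 1 (so GOTO 3 re-enters at INPUT A).
def run(program, inputs):
    lines = [l.split() for l in program.strip().splitlines()]
    inputs, env, out, pc = iter(inputs), {}, [], 0
    while pc < len(lines):
        tok = lines[pc]
        if tok[0] == 'VAR':
            env[tok[1]] = 0
        elif tok[0] == 'INPUT':
            env[tok[1]] = next(inputs, None)
            if env[tok[1]] is None:      # out of input: halt
                break
        elif tok[0] == 'PRINT':
            out.append(env[tok[1]])
        elif tok[0] == 'GOTO':
            pc = int(tok[1]) - 1         # 1-based statement numbers
            continue
        elif '=' in tok[0]:              # e.g. C=A+B
            dst, expr = tok[0].split('=')
            a, b = expr.split('+')
            env[dst] = env[a] + env[b]
        pc += 1
    return out

prog = """VAR A
VAR B
INPUT A
INPUT B
C=A+B
PRINT C
GOTO 3"""
print(run(prog, [1, 2, 10, 20]))   # -> [3, 30]
```

So yes, it is usable, in the narrow sense that the sample program loops forever summing pairs of inputs until the input runs dry.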
Re:What's the "lightest" you can get? (Score:2, Funny)
This is a powerful, intuitive, interpreted, simple, no-point-oriented (NPO) helloworlding-language.
Re:What's the "lightest" you can get? (Score:2, Funny)
> You sir, are mistaken. The only ability a programming language really needs is to output "Hello world" to the screen. Here's an interpreter (in Perl)...
That's nothing. I'm putting the finishing touches on my new processor design, and includes a native PHW opcode, no arguments.
Re:What's the "lightest" you can get? (Score:2)
Re:What's the "lightest" you can get? (Score:1, Insightful)
Think of simple tasks that you would do with e.g. a shell, or Perl, or even small C programs. How far can you strip a language down, yet still be able to accomplish those tasks?
That complicates things immensely. (Score:2)
This changes it from a "closed" language, with fully-specified behavior, complete within itself, to an open language which may be extended to arbitrary behavior with external modules, so it's not really a small language at all, just a small interface to a huge language. It either needs to be a compiled language, or to have compiled modules.
On the other hand, if you are willing to allow an environment designed as needed, to access all needed functionality in the form of one input and one output line, it returns to the sort of task a Turing machine can do.
Re:What's the "lightest" you can get? (Score:5, Informative)
I'd say that the lambda calculus is more lightweight, and also easier to program in than your example:
exp ::= x (variable)
      | exp1 exp2 (application)
      | (fn x => exp) (function)
Basically the key is that you have higher-order functions (you can pass them around), and that's it. With this, it's relatively easy to code up booleans, integers, lists, recursive functions, trees, or practically anything. (If you wanted to do IO, you'd need some kind of primitive functions for interfacing with the machine.) Since everything is higher-order, it's easy to code these once and then pass them around. It's not as nice as a modern language, but it's nicer than a Turing machine...
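As a concrete illustration of coding up booleans and integers with nothing but higher-order functions, here is a Church-encoding sketch in Python (the names are mine; this is the standard lambda-calculus construction, just written in Python syntax):

```python
# Church encodings: booleans and numerals built from nothing but
# single-argument functions.
true  = lambda t: lambda f: t          # picks its first argument
false = lambda t: lambda f: f          # picks its second

zero = lambda s: lambda z: z           # apply s to z zero times
succ = lambda n: lambda s: lambda z: s(n(s)(z))

def to_int(n):
    """Decode a Church numeral by counting applications."""
    return n(lambda x: x + 1)(0)

# Addition: apply s m times on top of n applications of s.
add = lambda m: lambda n: lambda s: lambda z: m(s)(n(s)(z))

two   = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))   # -> 5
print(true("yes")("no"))         # -> yes
```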
Actually, there is a simpler language that uses only two functions (!), but this one is pretty hard to program in directly.
Re:What's the "lightest" you can get? (Score:2)
VAR B
INPUT A
INPUT B
C=A+B
PRINT C
GOTO 3
Can we get even more lightweight?
Sure. Why do you have to specifically declare variables? And why have a special syntax for "input a"? Why not just have input return its own value?
(print (+ (read) (read)))
Heyyy, that looks familiar....:)
Re:What's the "lightest" you can get? (Score:5, Informative)
If you want to experience the Turing tarpit (where anything is possible, but nothing is easy enough to actually do) firsthand, try the Brainfuck [muppetlabs.com] language, based closely on the Turing machine. The language has 8 instructions, and only one of them (input) has any arguments beyond an implicit current location. The compiler is 240 bytes!
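To show how small those semantics really are, here is an illustrative Brainfuck interpreter in Python (not the 240-byte compiler mentioned above, just a readable sketch: a tape, a pointer, and matched brackets):

```python
# A minimal Brainfuck interpreter: 8 commands over a byte tape.
def bf(code, inp=""):
    tape, ptr, out, i = [0] * 30000, 0, [], 0
    inp = iter(inp)
    # Pre-match brackets so [ and ] can jump in O(1).
    jump, stack = {}, []
    for pos, ch in enumerate(code):
        if ch == '[':
            stack.append(pos)
        elif ch == ']':
            j = stack.pop()
            jump[j], jump[pos] = pos, j
    while i < len(code):
        c = code[i]
        if   c == '>': ptr += 1
        elif c == '<': ptr -= 1
        elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.': out.append(chr(tape[ptr]))
        elif c == ',': tape[ptr] = ord(next(inp, chr(0)))
        elif c == '[' and tape[ptr] == 0: i = jump[i]
        elif c == ']' and tape[ptr] != 0: i = jump[i]
        i += 1
    return ''.join(out)

print(bf("++++++++[>++++++++<-]>+."))   # -> A  (8*8 + 1 = 65)
```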
Re:What's the "lightest" you can get? (Score:2)
Bf is a lot of fun, but not light in the sense of Perl or Scheme. Since the article didn't define "light language", I'll give it a shot. Looking at their choice of languages, light appears to mean: easy to program, easy to understand, but powerful, interpreted languages. Bf is none of the above. About all you can say for it is that it's Turing-complete!
BF's second advantage. (Score:3, Funny)
No, it also has the advantage that, if some poor sap goes to all the trouble to write a daemon in it, you get to smile and say "bfd!"
-- MarkusQ
P.S. It's also occasionally useful to drive a spike in "you can't do Y in language X" debates that have gotten out of hand.
Re:What's the "lightest" you can get? (Score:3, Informative)
Yup, BF, Turing machines & lambda calculus are "light" in the sense that they are tiny but theoretically complete & of not much practical programming use. For instance, you can't do TCP sockets in Brainfuck no matter how hard you code, 'cos there's no way to get to the OS socket API.
The PHP that I use at work (and the Perl, Python, Ruby etc. that other people use for similar tasks) are "light" in the sense of being flexible and quick to knock something together with; they integrate well & come with a great big heavy library full of useful stuff.
Re:What's the "lightest" you can get? (Score:2)
There's an x86 compiler for bf at ~170 bytes, but isn't the smallest bf compiler written in bf well over a gigabyte?
-- MarkusQ
Re:What's the "lightest" you can get? (Score:1)
Yes, I know it's very useful for some purposes, but ultimately it's a lightweight language.
How about this? (Score:2)
Odd numbers are true, evens are false, and control flow is through conditional return and conditional tail recursion. Comparisons and other arithmetic operations can easily be built up from addition and negation. Named variables can be created by making subroutines that return an address.
I call it, simply, rpol.
RPOL
#!/usr/bin/perl
%commands=();
%data=();
@mainstack=();
%builtins=(
'+'=>sub{push @mainstack, (pop(@mainstack)+pop(@mainstack))},
'neg'=>sub{$mainstack[-1]=-$mainstack[-1]},
'set'=>sub{$temp=pop @mainstack; $data{$temp}=pop @mainstack},
'get'=>sub{push @mainstack, $data{pop @mainstack}},
'in'=>sub{read STDIN,$temp,1; push @mainstack, unpack('c',$temp)},
'out'=>sub{print STDOUT (pack 'c',(pop @mainstack))},
'eof'=>sub{push(@mainstack, (eof STDIN)?1:0)},
'<?'=>sub{(pop(@mainstack)&1)?'repeat':0},
'ret?'=>sub{(pop(@mainstack)&1)?'return':0},
);
open(PROGFILE, "<$ARGV[0]") or die "Couldn't open program file.";
while(<PROGFILE>){
chomp;
if(/^#/){
#ignore comment
}elsif(/^:\s*(\S+)\s+(.+?)\s*$/){
$command=$1;
$commands{$command}=[split(/\s+/, $2)];
}elsif(/^\s*(\S.*?)\s*$/){
rpol_exec(split(/\s+/, $1));
}
}
sub rpol_exec{
REPEAT: ;
for(@_){
if(exists $commands{$_}){
rpol_exec(@{$commands{$_}});
}elsif(exists $builtins{$_}){
$temp=$builtins{$_}->();
if($temp eq 'repeat'){ goto REPEAT }
if($temp eq 'return'){ return }
}elsif(/^(\d+)$/){
push @mainstack, $1;
}else{
print STDERR "Unknown token: '$_'\n";
exit(1);
}
}
}
#END FILE
For a sample, I wrote a program that swaps every two bytes of input, then writes them to output.
Sample Code (test.rpol):
#unused cat utility
:cat in out eof not <?
#unused newline function
:nl 10 out
#main program
:not 1 +
:swap 1 set 2 set 2 get 1 get
:cleanup eof not ret? out
:swapcat in cleanup eof ret? in swap out out eof not <?
swapcat
#END FILE
Re:Whats the "lighest" you can get? (Score:3, Interesting)
Oh my, yes. All you need to compute is three operations (and another couple to do I/O). Check out unlambda [eleves.ens.fr]. Lighter than Brainfuck, probably even more maddening, since it doesn't have state like a Turing machine does.
Change the I/O ops to read and write arbitrary memory locations, and you could write an operating system in unlambda (the same goes for any of these other toy languages).
Coolest part (Score:2)
I think that's the coolest part of probably the whole conference. If Perl/Parrot/Python can manage to take the best of both the academic and the practical worlds, they'll be unstoppable. Heck, it might even be a first! The two seem allergic to talking to each other, as if they'll become contaminated, rather than treating each other as a chance to learn, grow, and test.
Have you tried UF ? (Score:1)
Re:Have you tried UF ? (Score:2)
Just what we need more of (Score:2, Redundant)
I get very tired of the "X is better than Y" fights. They're pointless, and if this collection of language pros can avoid it, so can we. The better language is the one that gets the job done best for you, period.
Rather than clinging to our cliques, getting together with users and creators of other languages is beneficial to everyone. Hybrid vigour, if you like.
It's this sort of cooperation the open source movement in particular should embrace, not petty squabbles over syntax preferences. In the end, everyone should win.
Re:Just what we need more of (Score:1)
Re:Just what we need more of (Score:1)
that's why the event didn't take place on slashdot (Score:2, Funny)
Liquid Nitrogen Ice Cream (Score:1, Interesting)
Oh, and we found that you should pour liquid nitrogen into containers first rather than trying to make ice cream by directly pouring it into a mix of milk and butter. And that the ice-cream so produced is exceptionally tasty.
Years ago I read an article about a guy from Jackson Hole, Wyoming who made gourmet ice cream. He had determined that the two things that separated good tasting ice creams from the rest were:
1. Fat. Ice cream needs lots of fat.
2. Size of the ice crystals. The water in ice cream can be frozen in big crystals or little ones. If you freeze it slowly, you get big crystals. Freezing quickly leads to small crystals. Small crystals == better ice cream.
So this guy found that he could make the smallest crystals by pouring everything into a big bowl with some liquid nitrogen and stiring it really quickly. This was after trying several different methods of freezing the ice cream, none of which were fast enough for him.
He said that a good test of ice cream was whether it floated in water. Good ice cream should be dense enough to sink. I guess this is due to the high fat content. Of course once you put it in water, it is no longer good ice cream, right?
Re:Liquid Nitrogen Ice Cream (Score:1, Offtopic)
Fat is *less* dense than water (it floats).
Maybe the right kind of cold fat sinks (as opposed to the water which expands upon freezing)?
Anyway, sinking ice cream is strange.
Re:Liquid Nitrogen Ice Cream (Score:2, Offtopic)
In some places there's a limit on how much air can be in ice cream: 50%. There's no lower limit, but at 0% you've just got a block of ice. So there's a de facto lower limit.
Something like Ben and Jerry's has much less air. That's why it's denser, and that's why sinking ice cream can be a measure of quality.
Re:Liquid Nitrogen Ice Cream (Score:2)
Hooray for lemming moderation!
PS, this guy is right about the air.
Re:Liquid Nitrogen Ice Cream (Score:1)
But anyway if you have lots of fat in the mix, it should sink when it gets cold because as a whole it will be more dense than the water surrounding it.
Re:Liquid Nitrogen Ice Cream (Score:1, Offtopic)
Re:Liquid Nitrogen Ice Cream (Score:2, Funny)
Re:Liquid Nitrogen Ice Cream (Score:1)
Once upon a time, when my girlfriend and I didn't know each other very well (basically, we were dating), we bought a packet of cheapo ice cream (1 kg, banana-chocolate). Well, that night we couldn't eat it to the end, so we decided to dump it. Into the toilet.
Guess what? It didn't go down. That brick'o'**hit floated there. I read a short prayer upon its soul, and we went to bed. In the morning it was all melted and went down beautifully.
And the moral of the story?
you tell me some.
Re:Liquid Nitrogen Ice Cream (Score:2, Offtopic)
We sometimes do this as a party trick at midwest SF cons. Take your basic ice cream recipe in a big bowl, then have one person slowly pour LN2 from the dewar while the other one stirs madly. I'm not sure why the article recommends pouring the nitrogen into containers first.
Liquid oxygen works wonderfully as well. Last summer in Michigan we made LOX ice cream with freshly-picked thimbleberries. (And no, it doesn't burn! Not even when you put a blowtorch to it...) In a pinch you can even use dry ice. Have someone rub a block of dry ice on a cheese grater over the bowl. This method tends to leave some residual carbonation in the ice cream. Bring along root beer extract for flavor!
Other fun cryogenic tricks -- Everclear (190 proof grain alcohol) will freeze at liquid nitrogen temperatures. Small pieces chipped off evaporate marvelously on the tongue. An inverted scotch-on-the-rocks can be made by freezing scotch in an ice-cube tray and adding the cubes to a little water or soda. Winecicles are interesting, too, but beware the tongue-and-flagpole effect when you lick them!
Re:Liquid Nitrogen Ice Cream (Score:3, Interesting)
You can check out my video [thesync.com] about making ice cream with liquid nitrogen. I'm a bit afraid about the butter part; generally LN2 ice cream is made with milk and heavy cream, plus sugar and vanilla. I'll have to try pouring the mix into LN2 rather than LN2 into the mix.
What about INTERCAL? (Score:4, Funny)
I sure hope next year's LL2 addresses this issue.
Re:What about INTERCAL? (Score:1)
As has been said elsewhere, these kind of conferences are really focused on trying to find common ground so knowledge can be spread between specialists.
ESR has an unfortunate reputation for stridency; this could have been a reason for his absence.
INSERT SIG HERE
Re:What about INTERCAL? (Score:2)
=]
Re:What about INTERCAL? (Score:2, Insightful)
Re:What about INTERCAL? (Score:2, Funny)
Re:What about INTERCAL? (Score:2, Informative)
ll1.mit.edu [mit.edu]
Actually we did invite ESR, and we actually scheduled the workshop around his constraints. However, there were a few misunderstandings, and four days before the workshop we got mail to the effect that he wasn't coming.
Maybe next time.
didn't really expect a flamefest.. (Score:4, Redundant)
cURL addresses the niche of Flash??? (Score:1)
I wonder what this guy is talking about.
Re:cURL addresses the niche of Flash??? (Score:2, Informative)
Not a flame fest, but a bit of tension (Score:5, Insightful)
There was a bit of a superior attitude from some of the academics, who feel that languages like Perl and Python reinvent the wheel and neglect the body of academic research by coming up with suboptimal solutions to PL problems that have long since been "solved" in the PL literature. Maybe "frustrated" is a better word than "superior." While I can totally appreciate their point of view, I found myself cringing in embarrassment once or twice when a harangue by one of the academics went a little overboard. There has already been one post on the LL1 mailing list that I feel crossed the line.
The discussion came to a bit of a head during the (very interesting) "Worse Is Better" panel (based loosely on the writings of Richard Gabriel [dreamsongs.com]), which centered on the question of why the most popular languages aren't the "best" ones.
Like I said, though, it was mostly very congenial. Ultimately, I think each camp took something away from the encounter: both new-found implementation techniques, and a greater knowledge of and interest in the other community. There are some practical issues that the Perl/Python guys have to deal with (e.g., interfacing with legacy languages like C) that aren't really addressed by academics, and I think it was great that these issues were brought to light.
The LL1 website, if anyone is interested, is ll1.mit.edu [mit.edu].
Academia to Hackers (Score:5, Interesting)
I think we learned that many problems that we're facing in terms of Perl implementation right now have already been thoroughly researched and dealt with as many as 30 years ago; but we also learned that if we want to get at this research, then we need to do a lot of digging. The academic community is good at solving tricky problems ... but not very interested in working out all the implications.
This is the best paragraph in the article. Here's what makes me sad:
Slashdot-type hackers have an amazing ability to get things done. They can really come up with a working product faster than anyone.
BUT, slashdot-type hackers have a tendency to implement old ideas, and also frequently to make well-understood mistakes. It is true that we are on the cutting edge of implementing internet protocols and maybe window managers, but in other areas we are still implementing 30-year-old ideas. (OS design and programming languages come to mind especially.)
WHO, if not the hackers, will embrace this stuff? They are the only ones that are supposed to look beyond the hype and marketing and status quo to evaluate things based on technical merits, and to create implementations of new ideas.
I know only the OS design that I learned in my undergraduate course. But that is enough to know that the design of the kernel is very conservative! Where are capabilities? Where is fine-grained access control? Does anybody *really* think that their internet daemons should run as *root* just so they can open up a port with a low number? (I know there are plenty of workarounds...) I am sure that there are dozens of great ideas in OS design from the last 20 years that would be totally appropriate for a hacker's kernel.
I know a bit more about PL design. Being in academia pollutes the mind, I know, but I am sure that almost all I see in the slashdot PL community is reworking of old, mediocre ideas. Who in the world will use and develop new programming languages if not hackers?
(So, the PL fanatic in me wants to point out caml [inria.fr], which, even though it is not my personal favorite, I think could become really popular with slashdot-style hackers. It is really fast -- probably the fastest, it is hacker buzzword-compliant (it has "objects"), and yet it has taken many great ideas from academia and put them in a really usable, accessible form. Try it if you are in for a taste of something different!)
Anyway, just trying to say that if you are tempted to go hack up your own programming language, please at least don't assume that Perl is the state of the art because it is the most popular scripting language or something. Take a class, read a book, and check out some of the weirder languages coming out of academia first. Hackers are how the revolution happens...
Re:Academia to Hackers (Score:3, Insightful)
(So, the PL fanatic in me wants to point out caml [inria.fr], which, even though it is not my personal favorite, I think could become really popular with slashdot-style hackers.
Of course ML languages are 20 years old, and Caml was developed *before* Perl and Python. So it isn't necessarily that the newer ideas are better, just that lots of good ideas tend to get lost to history for various reasons.
Definition time (Score:2)
This is much more difficult with languages where the type of an object is expected to be known when you write the program. Caml seems to be of this latter class. (So are Java, Eiffel, Ada and C++.)
Notice that the second group of languages tends to be faster, but less flexible. This appears to be an inherent trade-off (though Java paid extra by having an interpreter for security and portability reasons).
Type inference to the rescue! (Score:5, Insightful)
Caml does full type inference for you, so that you have to write fewer types than you would in C or java.
In fact, in Caml you really only have to write types when you write down an interface to a module -- and this is exactly what languages without sophisticated type systems lack. It is very difficult to write precisely what your interface is without writing down types, and if the type language is poor (i.e., Java, or worse: perl) then writing interfaces becomes more an exercise in documentation and finger-crossing.
(Personally, I also find that automatic type checking is very conducive to writing maintainable programs. It keeps me from making the gross hacks that are so tempting in perl. Typically it doesn't make my programs any longer or more difficult to write, since ML-family languages have lots of features to capture the common idioms that require this "flexibility" in perl et al.)
Careful not to make too many generalizations. I think Caml is much nicer than other typed languages you mention.
Re:Type inference to the rescue! (Score:2)
Re:Academia to Hackers (Score:4, Interesting)
> language implementation by wasting your time
> reading the many years of esoteric research
> published on the subject, especially since real
> languages frequently have to do things that don't
> make for terribly fascinating research.
Of course this is true, but I am not asking anyone to waste years on *esoteric* research. I am merely proposing that people designing a new OS or programming language look at the current state of the art; to at least know about and consider seriously the *known good ideas* in academia.
Here are some glaring examples of features that ML (for instance) has that are *damn useful*, totally not esoteric, yet typically don't even find their way to the table in the design of a slashdot programming language:
- parametric polymorphism. (No casts!! Java is slowly getting this, finally; they call it "generics").
- datatypes and pattern matching. (Makes processing recursive data structure like lists and parse trees beautifully simple!)
There are many other things I can think of, which have varying degrees of obscurity, but I think these two are firmly on the useful side.
My point is that people somehow bizarrely confuse "popular" with "state of the art". Like, in a slashdot discussion about programming languages, I invariably hear, "X is better than Y because X is object-oriented," as if object-oriented programming is the pinnacle of PL design. It's 30 years old! (Even the ideas I propose above are about that old!) It is worth looking at more recent ideas, and those aren't typically to be found in mainstream programming languages.
Re:Academia to Hackers (Score:2, Insightful)
Re:Academia to Hackers (Score:2)
Actually, I think this is flat wrong. Can you think of an example? Just about every aspect that current "real" languages implement has been researched to death - and then some.
You don't get to the cutting edge of programming language implementation by wasting your time reading the many years of esoteric research published on the subject
If you're a professional language designer - and I know a few - and you don't do this, you're pretty much doomed to failure, because someone who knows what they're doing is going to come along and eat your lunch. Java is a case in point - one of the designers was Guy Steele, co-inventor of Scheme. He knew what he was doing, and although Java is chock-full of interesting and questionable compromises, it still blows away many other "real" languages.
Intelligent people don't argue about language. (Score:5, Insightful)
Not surprising. The only people who get into flame-fests about programming language choice are insecure newbies. It comes down to the same reason kids argue about whose game system is better: they got one for Christmas and feel compelled to defend their choice, because they can't afford another. Once you know a sizable number of computer languages--especially different styles of language--then you no longer feel a need to be so petty. Different languages have different strengths.
It all boils down to bit flipping 1's and 0's (Score:2)
Certainly, if the target is to be an optimized sequence of 0's and 1's, then isn't it the translation mechanics that are responsible for getting it there, from whatever vocabularies and syntaxes are used?
This is where I believe genuine computer science and software development research got seriously distracted by the carrot of money. And as the article mentions regarding not doing it right, as a tradeoff for getting it out the door, getting back to genuine computer science may be difficult to do! But it also seems to be an ongoing and growing problem in genuine software engineering. The latest version of a need to solve the software crisis? [ibm.com]
Note that IBM presents a white-collar, high-dollar I/T solution direction, but without any identification of the base functionality mechanics of translation. Read Written Comment #4 [uspto.gov] after reading the "Manifesto" at the above IBM link.
With all this in mind, what are all these "Lightweight Languages" but examples of how many ways you can create a custom vocabulary, syntax and translator that outputs 0's and 1's, not always in the optimum sequence?
Re:It all boils down to bit flipping 1's and 0's (Score:2)
The fact that every language is an application of basic computing concepts is no more help than telling a research pathologist that all viruses are made of matter. It's simply not science to keep pointing out things we already know.
Re:It all boils down to bit flipping 1's and 0's (Score:2)
Encoding to a selected base is also a translation process, but not one that is directly compatible with the hardware without further translation (which you don't see), unless it is to the base compatible with the hardware.
Hardware is made of matter and apparently a refresher course wouldn't hurt you. Back to basics is always a good thing when you have forgotten them or have gone astray to the point of failing to solve problems like the software crisis.
No, actually, it doesn't. (Score:2)
You completely miss the point. If you want to address your so-called software crisis - which is only a crisis when you have unrealistic expectations, based on ignorance or denial of the issues being faced - then you need to provide humans with languages that allow them to express programs in powerful ways, that make programming easier and more reliable. Focusing on the 1's and 0's completely misses the fact that the challenges lie at the level at which the humans controlling the machines operate.
Academia has produced many innovations in these areas. All modern mainstream languages can benefit from these "new" language technologies (some of which are actually decades old). The LL1 workshop was about communicating between those who have developed sophisticated and powerful ways of dealing with language problems, and those who have a record of having implemented languages that are popular with humans - languages that are used not because of mandate from on high, but because they're perceived as easy to use and also powerful, and thus desirable to use.
An additional interesting element here is that the mainstream "Lightweight Languages", like Perl and Python, have a better track record than the big commercial languages of incorporating these ideas - witness the fact that Perl and Python support advanced capabilities such as closures and continuations, whereas other recent languages, like Java, have limited to nonexistent support for such things.
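As a minimal sketch of what a closure gives you, the ability of a function to capture and retain variables from its defining scope, here is a hypothetical Python example (the names are invented for illustration):

```python
# make_counter returns a function that captures `count` from its
# enclosing scope; the variable outlives the call that created it.
def make_counter(start=0):
    count = start

    def next_value():
        nonlocal count
        count += 1
        return count

    return next_value

c = make_counter()
print(c())  # → 1
print(c())  # → 2

d = make_counter(100)  # an independent closure with its own state
print(d())  # → 101
```

Each call to `make_counter` produces its own private state, which is the essence of what makes closures more than mere function pointers.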
A collaboration between the authors of mainstream lightweight languages, and academic language researchers, opens the possibility to accelerate language development in a sorely needed way - instead of innovations taking literally decades to make their way from academia to the mainstream (e.g. the way object-orientation did), this lead time could be reduced to mere years.
In addition, via lightweight languages, these features would be delivered in a form more palatable to the audience consuming them. Lightweight languages tend to recognize the pragmatic needs of their users, as opposed to imposing restrictions based on aesthetic constraints such as "elegance".
In summary, the plethora of lightweight languages is a simple reflection of a dynamic and fast-evolving ecosystem, an absolute requirement for further progress in the extremely complex endeavour of humans programming machines.
Re:No, actually, it doesn't. (Score:2)
Your failure to know this seems to be consistent with failing to understand that it's not "language" and "syntax" that are the real issue, but rather getting the translation mechanics figured out.
This way it really won't matter what vocabulary (language) set and syntax you use. Rather, it opens the door to combining languages, as well as extending and creating them, allowing you to use the better vocabulary and syntax for what you are expressing.
Translation simply takes whatever you have written and converts it into the optimum bit sequence for the machine to deal with.
Or, for that matter, translation from whatever form to whatever target form is defined. Like human-to-human translation (e.g. English voice input to German spoken output).
The translation mechanics are going to be the same.
There is nothing wrong with defining new concepts, as language does. But having the science of translation mechanics figured out will enable new concepts to be applied a lot sooner, and probably a lot more easily too.
Re:No, actually, it doesn't. (Score:2)
No, but you used it as though you believed it. In my experience, now that it's no longer the '60s or '70s, the phrase gets thrown around by people who know little about software development, usually to sell a product or idea.
Translation simply takes whatever you have written and converts it into the optimum bit sequence for the machine to deal with.
Or, for that matter, translation from whatever form to whatever target form is defined. Like human-to-human translation (e.g. English voice input to German spoken output).
The translation mechanics are going to be the same.
You might want to read Joel Spolsky's piece on "Architecture Astronauts [joelonsoftware.com]". Suggesting that all translation from any language to any other language involves the same translation mechanics, to me simply indicates that you've never actually done any work or studying in this field, and are indulging in armchair speculation. Any abstraction at that level will be essentially useless to the task of actually translating the material in question.
Take a look at what's involved in rewriting code in a functional language to a compiled form - an area that's enjoyed a lot of academic attention - and compare that to tools which translate human languages. The commonality there is minimal, at the level of stratospheric overviews like "get input; translate input; produce output". Architecture astronauting indeed! Dare I point out that all computation follows this pattern - "translating" a problem into a solution - so once your universal translation mechanics have been developed, we'll never have to write another line of code? "O great translation mechanics, what is the answer to the question of life, the universe, and everything?" Sounds good, let me know when you're done!
Re:Your web pages (Score:2)
matter what I say or write.
Also, Slashdot is a limited message board, in that it is not designed to support ongoing threads but rather to gather pretty much initial feedback on posted articles.
I'm sure that if you seek, you shall find any article you want, typically one supportive of your own perspective.
Your perspective is one that does not recognize the existence of the "software crisis", and as such there certainly can't exist a solution for a problem that doesn't exist. To support your perspective you will of course ignore anything suggesting otherwise. In this case it means that you also have to ignore the fact that the term "software crisis" came from the industry and is still used today. In essence you are saying the computer industry lies, which then is a problem to solve. As a consumer and potential producer, I'm tired of being lied to by the computer industry, but I'm not going to ignore the potential of the computing tool I call an abstraction manipulation machine.
You can go from the extreme of architecture astronautics to the opposite of not seeing the forest for the bark of the particular tree you are looking at, but the only thing that proves is that any perspective can be presented with a bias for or against, and even applied in ways claiming to prove either. It's rather inherent in an abstraction machine's ability to do so: to follow what a human directs it to do.
But in all of this, you cannot avoid using the nine action constants. You can of course choose not to be aware of this. But so long as you keep ignoring it, you will not be able to program complexity at the level of what would be required of just the programming side of a high-quality virtual-reality holodeck, and certainly not at the direction of a small child.
So as long as you are happy and can function productively within the ceiling of constraints that is the current approach to programming, there is no problem such as the software crisis. But for those, like myself, who see how to go well beyond current ceilings of constraints, those who resist and either prevent or suppress such advancements, or only allow them to happen under their control, are in fact practicing elitism. And as a result of elitism there is inherently failure, due to incomplete satisfaction of clients and users, especially those clients and users who want to do it for themselves, or add to what exists, or customize, but are not allowed to for lack of the availability of down-to-earth tools of abstraction manipulation, of automation mechanics.
There is no magic algorithm, only the human programming of dynamic automation, which includes the automation of programming, of an endless number of things and of actions upon those things, in order to translate or convert input to output. And given such an endless number, compatibility and integratability become issues to resolve.
The common conversion point is the carrier signal, the gears and bearings, the action constants: what abstraction manipulation, translation mechanics, is. It's not a language, but an action set.
There is no rule of knowledge, or of putting things together, that says I need to study every written work in existence in order to be smart enough to put things together. But if it will make you happy, I'll use the nine action constants to add your suggestions to what is becoming an endless list of what others think I need to do before I'm somehow qualified enough to become aware of the actions I use with consistency, to do anything I may choose to do.
However, since I do realize these actions, I can certainly see how awareness and control of them can help me create "word = definition", and how, with enough words, I can create new words with new meanings, etc.
Really, just because something has become second nature to us doesn't mean we are always aware of our use of such second-nature things, though when they are pointed out we may think nothing important of them. But the thing is, computers don't know "second nature", only what we program them to do. And it seems that although the computer software industry can automate anything, no matter how complex, the one thing it consistently seems to fail at automating is the field of programming itself, to the extent of failing to let the typical user make use of such automation.
Human language converted to the language a machine understands at the physical hardware level is an act of translation that uses the gears and bearings of translation to do so, which are the natural laws of the physical phenomenon of abstraction creation and manipulation.
Why have I taken the time to try to get you past all the things you do not want to get past? Because someone else will see this here, in the reader responses to a several-day-old Slashdot article?
There is a link near the bottom of my new home page regarding an argument for a Python-installed system...
Re:Your web pages (Score:2)
The "No Silver Bullet" article is not currently available through my ISP, but like most other links to articles you have posted, I'm aware of it and its content.
Did I not already say that there is no magic algorithm? As for what you quoted, it is true that there is no "invention": the physical phenomenon of the natural laws of how we use abstractions is, three for three, not something you can claim as an invention.
You mentioned Plan 9, and I'm aware of the plumber, but realize it's rather limited, though a concept in the right direction.
As far as solving problems goes, there is a genuine class of problems to resolve, and that is a task for genuine software engineering. But it should also be a task that, once solved, is made available for everyone to use in an automated manner. Such an example might be the business-form-based application you mentioned, as it seems to be figured out well enough to then be automated in such a manner that users can design and create their own specific applications of these concepts.
I really don't need to spend my life inputting into my brain all the "stuff" the computer industry comes up with. Such an expectation from the computer industry, that anyone do this in their creative and productive use of computers, is absolutely foolish and arrogant, if not just plain immature and ignorant. For who, in all these other industries and uses of computers, has the time to do what even those in the computer industry have a hard time keeping up with?
I don't! But what I do have time for is to make use of consistent, physics- and nature-based (intuitive, as far as intuitive goes) translation tools that allow me to put things together in the two-dimensional space of the abstraction world, where the details are accessible for the benefit of fixing and/or customizing, but where their use is automated, so that those who don't have the time to create the details, or even to know what the details are, can still use them in their own creations via automated use: "auto-coding."
So while you are arguing that I need to know all about creating a car, from its first invention, through design and recent breakthroughs, to manufacturing all the parts, including the mining of metal and the development of rubber... all the way out to the junkyard,
the fact is, I don't need to know any of this in order to learn to drive the car and make creative, productive use of it.
Granted, a computer is a different vehicle; it's far more versatile and can be shaped/programmed in very specific ways. And just as cars and other road vehicles have an established way of being used, with some variation, the tool of abstraction manipulation mechanics needs one too. As it turns out, it needs to be the natural logic of abstraction creation and use, and as such it is inherently not patentable. And that is what the VIC is.
There are over 3000 programming languages that have been developed over the history of electronic computers. That's far more than the documented human languages in the recorded history of man. But across all of these programming languages, the list of programming concepts is a great deal smaller. So from the user making use of auto-coding down to the binary-based hardware, there are translation mechanics, which are the same physical and natural laws of abstraction manipulation, and of which the user is capable of making productive use, being creative with concepts others have developed.
Re:Your web pages (Score:2)
It is. Maybe having the objective of first identifying the action constants had something to do with this outcome?
I think you'd have better luck starting from scratch and coming up with a different way to perceive gravity in order to solve problems with the theory.
There is just something about me changing what it is I'm interfacing with (AI) to get to your last post (ID) in the thread (PK) and read it (OI) on my monitor (IP) and access the links you gave (IQ) before I can respond through the web editor (OP) within the limit (KE) of space to write and time I have to spend on a response. All of which I do in a sequential (SF) manner.
Although the human application of these nine action constants goes much further than in the example given, the point is that the selection of where they are identified for control is the key. Computers are less complex than we are, and as such we do have a choice as to where to identify the action constants in using them, through the computer, to automate.
In any event, they are action constants, not abstractions. They are what you use to process abstractions. And although I have assigned each of them a two-letter abstraction symbol, this symbol is itself redefinable, so that there is no conflict within the scope of a given use. And they are redefinable on the fly.
You cannot solve a general language problem by inventing another language. You can, however, understand the underlying action constants that are used in all languages, and so be able to change which language you are using in order to avoid the limitations of a given language, by going to a language that does not have the same limitations.
Sorta like using a saw to cut the wood but then a hammer to nail it, changing the tool as needed. And language is a tool.
Re:Your web pages (Score:2)
That's kind of my point. There are multiple theories of gravity. That's why I suggested philosophy of science as a worthwhile field of study - it might give you some perspective.
Like physical theories, when it comes to software models, most models are essentially really "views" when you get down to it, i.e. they inherently embody a particular way of looking at the problem in question. A comprehensive model can certainly be widely applied, and that's what you're claiming for your model. But if you think that's the only way of looking at it, I can safely say that you're wrong.
Re:Your web pages (Score:2)
Sure, you can view chemicals as magic dust, but how far is that really going to get someone?
With software, and no matter how you "view it", it's still going to be a matter of manipulating abstractions in order to control the physical state of the machine, be the machine binary, trinary or whatever. In doing this, there is an identifiable physical set of actions that is used regardless of how you view and define the tools of "computer languages".
The whole point of creating computer languages was not creating computer languages for their own sake, but controlling a massive count of switches that exist in reality: a count way beyond what any human can handle manually. The solution is in creating word = definitions to define a given state or change of state, like the simplest act in math, addition. Only here the available set of usable abstractions is limited only by imagination and real hardware limitations (the potential on both sides of the equation). At the very core it's bit flipping, but on such a massive scale that the only way to do it is to define and use such abstractions.
word = definition: the most fundamental act of programming.
But the action set remains the same, regardless of what you call it, how you "view it", or how you use it.
The VIC is an identification of these, defined in terms of computer functionality (inherently limited to the scope of the physical hardware) and in a configuration that supports maximum possible versatility. Even the name "Virtual Interaction Configuration" was taken from the field of physics, not from some computer-industry marketing hype of abstract words. Look up "virtual interaction" in the book "The Tao of Physics" (which you should be able to find in any decent bookstore; the original source is probably a lot harder to find, but it is mentioned in The Tao of Physics).
So we have a mass of physical switches to control, and we use the tool of "word = definition" to make that easier to do. And as the mass count of "word = definition" grows, we then create languages and syntax to help us better deal with the count while adding versatility. But at this point it's not a matter of dealing with a mass problem of physically based switches, but rather a mass problem of abstractions.
And it is here that recursion should be identified: recursion in using the same solution direction and physical action set to solve the "mass" problem.
You can go to higher and higher levels of abstraction, of languages, but every time you run into a "mass" problem, the same action set will be used to solve it. And that proves the action set valid, regardless of what level you use it on, be it in creating the hardware (e.g. transistor = a layered configuration of chemicals at the atomic-charge level, CD player = a hardware configuration, etc.), in going from hardware to software, or at the abstraction level of software.
So after creating over 3000 programming languages, you'd think the science of software would have figured this out already.
Oh wait! There is no science of software yet. And that is because the dragon of money has blinded us, and made witches and warlocks who brew magic potions, potions that give them power over the ignorant. If you sail out there, you'll fall off the edge of the world, if you make it past the sea dragons....
Isn't that what you keep trying to tell me?
You know, astro-architecture....etc..
Re:Your web pages (Score:2)
Isn't that what you keep trying to tell me?
No, what I'm trying to tell you is that there is a science of software, which apparently you haven't investigated at all. You're focusing on the shortcomings of a relatively immature "industry" without looking at what the scientists are doing, much of which will drive what happens over the next 5, 10, and 50 years, as it has done to date.
Industry often tries to ignore the science, for reasons of its own; science usually wins in the end, because it's hard to ignore reality for long periods of time. What I'm trying to tell you, is that you shouldn't ignore the science.
Obviously... (Score:4, Interesting)
(or any other Forth derivative, such as BigForth [jwdt.com] - for Linux and Windows - which includes a breathtaking GUI RAD tool: Minos)...
Here's a small ColorForth program: an IDE disk driver [colorforth.com].
Parrot - new Lingua Franca? (Score:2, Insightful)
I doubt this is the first effort to create a popular open VM, but it seems to be one of the most heavily promoted. Hopefully we will see Parrot-based languages springing up everywhere, and perhaps even ports of existing languages.
Lightweight Languages? (Score:1)
Definition (Score:3, Informative)
Best quote from the article (Score:2, Interesting)
will probably come to regret in time."
I make a prediction... (Score:5, Funny)
Paul Graham rounded off the talks by talking about his new dialect of Lisp, which he called Arc. Arc is designed to be a language for "good programmers" only, and gets away without taking the shortcuts and safeguards that you need if you're trying to be accessible.
I predict that someone will later come out with a new and improved version of this language which is backward compatible, and runs 10 times faster. That language will, of course, be called Zip.
Re:I make a prediction... (Score:2, Insightful)
I'm really excited about this language. They're going to give an honest shot at making a lisp that will have more general appeal (read the part about onions: they're taking the "onions" out of common lisp), yet still maintain the raw power of macros. It should be very exciting.
But how does Arc get implemented? (Score:2)
That way the sets of "library stuff" already in existence could be used, and they're not left re-implementing everything from the compiler on down...
Relational Programming (Score:2)
Next time include the requirements (Score:2, Interesting)
Commercial information tends to be persistent, not transitory. A good language should work directly with stored data.
Processes in organizations are long-lived and distributed, whereas typical programming languages just deal with transient threads etc. (outside workflow systems such as WebLogic Integration).
Programs represent rules, algorithms and other forms of knowledge that end-users will want to add to (e.g. a discount formula). Not only should the environment allow run-time modification and extension, it should also support representations and syntaxes accessible by non-programmers.
Every action has a principal actor associated with it, and typical commercial environments need to record who it was for auditing and access control purposes. If a programming language has no concept of Principal, one has to be stuck awkwardly on the side (e.g. Java EJB isCallerInRole).
Transactions are a very common programming model. At the very least, there should be support for creating and propagating transaction IDs, restarting procedures etc.
What else? Run-time metrics, versioning, SQL-style set predicates... well, you get the idea. People have to wake up to the fact that there is still a huge disconnect here.
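To make the run-time extensibility point concrete, here is a toy sketch of one of the requirements above: letting end users supply a rule (a discount formula) as data, applied at run time. The rule syntax and function names here are invented for illustration, not drawn from any real commercial system:

```python
# A rule factory: orders at or above `threshold` get `rate` off.
def make_discount_rule(threshold, rate):
    def rule(order_total):
        return order_total * rate if order_total >= threshold else 0.0
    return rule

# "Configuration" an end user might edit or extend, rather than code:
rules = [make_discount_rule(100.0, 0.05), make_discount_rule(500.0, 0.10)]

def best_discount(order_total):
    # Apply every active rule and keep the most favorable result.
    return max(rule(order_total) for rule in rules)

print(best_discount(600.0))  # → 60.0
```

Because the rules live in a plain data structure, new ones can be appended while the system runs, which is the kind of end-user extensibility the comment is asking languages to support directly.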
(Amazing to think that Java gave Microsoft some ideas and a wide-open goal, and they came up with C#).
rebol (Score:2, Interesting)
But when I checked out rebol that was mentioned in the article I found it was in fact as good as it first seemed, maybe better.
Within an hour of first hearing about rebol I had written a gui program that displayed the live picture of the Tokyo Tower on the net and updated it every 60s.
When I first wrote this program, it was as a learning experience for C#, and it took a hell of a lot longer to write, and the code is much longer.
So maybe for me rebol is the ultimate lightweight language!
Little languages (Score:2, Informative)
I wrote a paper about it. Although it's true I am a pointy-headed academic, I do occasionally hack a few lines of code, and when I've solved a problem over in the research world whose solution would be useful to hackers, I try very hard to write papers that are readable by your generic hacker.
If you go here http://www.cc.gatech.edu/~shivers/citations.html [gatech.edu] you'll see a list of papers I've written. These are the ones that people in the perl/scripting/lightweight-languages community might find interesting:
#2 has an opening flame about a problem in the open-source community I call "the 80% solution" problem. The regex notation it describes is now standard with scsh.
#4 & #6 will be of interest to VM designers.
#8 is, ahh, somewhat more well known for its non-technical content. But I'm on a new set of meds now, and doing a lot better, really.
-Olin
Re:Reinventing the wheel and money (Score:2)
Bingo!! Give the man a C-gar!
The academic stuff that I've seen, in addition to what AC says above:
Note to academics:
CiteSeer (Score:2)
The research is there for the taking!! (Score:4, Interesting)
You couldn't be further from the truth. Someone's already mentioned CiteSeer. I've read and downloaded hundreds of papers from there. Google is great for tracking down papers, too.
Another nice resource is library.readscheme.org [readscheme.org]. It's Scheme-specific, but Scheme is the root of much research about programming languages and the underlying concepts - it pretty much spawned the field of functional programming.
The biggest barrier to entry for this sort of stuff is your own existing knowledge. There's no pill you can take to pick it all up overnight. You have to work hard at it. This is the real reason to go to a real university - not to learn how to program in the language du jour, but to learn about what some very smart people have already figured out over decades, centuries, millennia, and to learn how to think like those people.
There aren't many shortcuts here. It doesn't help to be told that there's a simple solution to the problem you're working on, if it involves a network of deep concepts you've never heard of and are totally unfamiliar with. To take some examples from functional programming: closures, continuations, continuation passing style, fold operators, polymorphic type inference... If you don't know what all those things mean, and can't use them in your code, you're unnecessarily limiting yourself and denying yourself leverage that can help get big, complicated things done more quickly, with less fuss.
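Of the concepts listed above, fold operators are perhaps the quickest to demonstrate: they capture a whole family of recursive list-processing loops in one higher-order function. A rough Python sketch (`foldr` here is written by hand for illustration; only `functools.reduce`, Python's left fold, is standard):

```python
from functools import reduce

# A right fold: collapses a list from the right using a two-argument function.
def foldr(f, acc, xs):
    for x in reversed(xs):
        acc = f(x, acc)
    return acc

# Summing with the built-in left fold:
total = reduce(lambda acc, x: acc + x, [1, 2, 3, 4], 0)

# Rebuilding a nested cons-style list with the right fold:
nested = foldr(lambda x, acc: [x, acc], [], [1, 2, 3])

print(total)   # → 10
print(nested)  # → [1, [2, [3, []]]]
```

Once the fold pattern is internalized, many explicit loops collapse into one-liners, which is exactly the kind of leverage the comment is describing.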
One way to start out is to learn some advanced languages. Scheme is a good starting point because there's so much tutorial literature for it. You can pick up the computer science concepts as you go along. Read Structure and Interpretation of Computer Programs [mit.edu] (SICP) and How to Design Programs [rice.edu] (HTDP). Join the ACM [acm.org]. There's so much stuff out there, go look for it, and apply yourself!