Programming IT Technology

Faster Chips Are Leaving Programmers in Their Dust 573

mlimber writes "The New York Times is running a story about multicore computing and the efforts of Microsoft et al. to switch to the new paradigm: "The challenges [of parallel programming] have not dented the enthusiasm for the potential of the new parallel chips at Microsoft, where executives are betting that the arrival of manycore chips — processors with more than eight cores, possible as soon as 2010 — will transform the world of personal computing.... Engineers and computer scientists acknowledge that despite advances in recent decades, the computer industry is still lagging in its ability to write parallel programs." It mirrors what C++ guru and now Microsoft architect Herb Sutter has been saying in articles such as "The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software." Sutter is part of the C++ standards committee that is working to make multithreading standard in C++."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • OS/2? (Score:5, Interesting)

    by SCHecklerX ( 229973 ) <greg@gksnetworks.com> on Monday December 17, 2007 @01:50PM (#21727218) Homepage
    I remember learning to write software for OS/2 back in the early '90s. Multi-threaded programming was *the* model there, and had it been more popular, it would be pretty much standard practice today, making scaling to multiple cores fairly effortless, I'd think. It's a shame that the single-threaded model became so ingrained in everything, including Linux. For one example that comes to mind: why do I need to wait for my mail program to download all the headers from the IMAP server before I can compose a new message on initial startup? Same with a lot of things in Firefox.

    Does anybody remember DeScribe?
  • Re:2005 Called (Score:4, Interesting)

    by gazbo ( 517111 ) on Monday December 17, 2007 @02:01PM (#21727406)
    In the end, you end up with something that sorts faster than n log (n).

    Not without an infinite number of processors you don't.

  • Personal computing? (Score:5, Interesting)

    by Dan East ( 318230 ) on Monday December 17, 2007 @02:10PM (#21727536) Journal
    "processors with more than eight cores, possible as soon as 2010 -- will transform the world of personal computing"

    Exactly what areas of "personal computing" require this horsepower? The only two that come to mind are games and video encoding. The video encoding part is already covered: it scales nicely to multiple threads, and even free encoders will use the extra cores to their full potential. That leaves gaming, which is basically a proprietary problem: each game engine must be designed so that AI, physics, and other CPU-bound algorithms can execute in parallel. This has already been addressed.

    So this raises the question: exactly how will the average consumer benefit from an OS and software that make optimum use of multiple cores, when the performance issues users complain about are not even CPU-bound in the first place?

    Dan East
  • Re:The basic problem (Score:2, Interesting)

    by Anonymous Coward on Monday December 17, 2007 @02:11PM (#21727566)

    In addition, most applications these days are not CPU bound. Having eight cores doesn't help you much when three are waiting on socket calls, four are waiting on disk access calls and the last is waiting for the graphics card.
    Processors don't "wait" on blocked IO calls. Your program waits while the processor switches to another task. When the processor switches back to your program, it checks to see if the blocked IO call has completed. If it has, it continues executing your program again. If not, your program continues to wait while the processor again switches to other tasks.
    So it is you (as the programmer) who determines whether your program just sits and waits for blocked IO to complete. Alternatively, you can spawn a thread for blocking IO calls so your main program thread continues executing (if that is viable in your situation).

    With more processors, your program and its blocked IO calls will be checked more frequently. So even blocked IO calls will see a performance increase.
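
    A sketch of that spawn-a-thread-for-IO idea, in modern C++ (fetch_mail() is a made-up stand-in for any blocking call; std::async is just one way to run it on another thread):

    #include <future>
    #include <iostream>
    #include <string>

    // Hypothetical blocking IO call - imagine it talks to an IMAP server.
    std::string fetch_mail() {
        return "42 new headers";
    }

    int main() {
        // Launch the blocking work on another thread...
        std::future<std::string> result = std::async(std::launch::async, fetch_mail);

        // ...while the main thread keeps doing useful work.
        std::cout << "UI still responsive here\n";

        // Block only at the point the result is actually needed.
        std::cout << result.get() << '\n';
    }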

  • by ILongForDarkness ( 1134931 ) on Monday December 17, 2007 @02:14PM (#21727616)
    Matlab isn't that smart; you still have to tell it that a for loop is parallelizable, for example. I might be wrong, but I don't think Java or C# do it automatically either. Their frameworks/VMs supply APIs for multi-threading; you simply call into them for the support you need. C has had pthreads for a long time (since it was standardized?), but for some reason the C++ committee has never agreed on an implementation.

    There is a great talk by Bjarne Stroustrup (http://csclub.uwaterloo.ca/media/C++0x%20-%20An%20Overview.html [uwaterloo.ca]) about the new version of C++ coming out and some of the difficulties in getting things added. Essentially, if a new feature will only help 100,000 developers, it isn't important enough to be implemented. With such a huge developer community, all the "little" things get left to non-standard API implementations; only big features that almost everyone will find useful get added. That is probably why this version of C++ or the next will get a standard thread library: almost everyone now has access to a multicore system. Oh yeah, and it sucks that anyone with a few thousand dollars to waste can get added to the committee. Most people don't care enough to pay that much (plus the travel and time off to attend the meetings) to get their feature implemented, except big business, so guess who runs the show (I don't expect anyone to be surprised).
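
    For the curious, the threading interface that eventually landed in the standard (C++11's std::thread) looks roughly like this minimal sketch:

    #include <iostream>
    #include <thread>
    #include <vector>

    void worker(int id) {
        std::cout << "hello from thread " << id << '\n';
    }

    int main() {
        std::vector<std::thread> pool;
        for (int i = 0; i < 4; ++i)
            pool.emplace_back(worker, i);   // launch four threads

        for (std::thread& t : pool)
            t.join();                       // wait for all of them
    }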

  • HPC (Score:2, Interesting)

    by ShakaUVM ( 157947 ) on Monday December 17, 2007 @02:17PM (#21727650) Homepage Journal
    As someone who has a master's in computer science with a focus in high performance computing / parallel processing, and who has taught on the subject: *yes*, it does take a bit of work to wrap one's mind around the concept of parallel processing and to write code with concurrency correctly. But *no*, it's not really that hard. Once you get used to the idea of having computation and communication cycles over a processor geometry, it becomes only a little more difficult to write parallel code than serial.

    It's kind of like when people see recursive functions for the first time. If they don't understand the base condition and the inductive step, they can easily fall into infinite loops or write bugs. Parallel code is the same way... just a bit more tricky.
  • Re:OS/2? (Score:2, Interesting)

    by shoor ( 33382 ) on Monday December 17, 2007 @02:19PM (#21727686)
    I was working at a very small software shop when OS/2 came out. We would get one customer who wanted something to work on an Apollo workstation, another who wanted it for Xenix, a third for Unix BSD 4.2 (my favorite), or Unix System V (ugh!), or DOS. So we got a project to port something to OS/2 version 1.0, and I got it to work, and it used multi-threading, which I thought was pretty cute; I was proud of myself for figuring it all out just from the manuals. Then the new revision of OS/2 came out and everything I had done was broken. My boss was so mad he swore off OS/2 forever after that.
  • by Nova1313 ( 630547 ) on Monday December 17, 2007 @02:19PM (#21727690)
    When the first AMD X2 chips came out, the Linux kernel had issues with the clock on those chips. The clock would run several times (presumably two times?) faster than it should; the cores' clocks were not synchronized for some reason, or the kernel would lose track... When you typed a letter it would repeat multiple times, as you described. :)
  • by MindPrison ( 864299 ) on Monday December 17, 2007 @02:23PM (#21727754) Journal
    It's not easy... especially since things sort of halted at 4 GHz. What on earth am I typing about? Well, picture this: limitations. Yes, they do exist, and sometimes it's important to think beyond what lies straight ahead (such as the next cycle speed) and think into a second, maybe even a third dimension, to expand your communication speed.

    For over six years I have been thinking of a three-dimensional processor that cross-communicates over a diagonal matrix instead of the traditional serial and parallel communication model. Imagine, folks, if your code could "walk" across a matrix of 10 x 10 x 10 instead of just 8 x 8, or 64 x 64 if you want. Get the picture? Imagine that your data could communicate on a three-dimensional axis - that you had 10 stacks of cores on top of each other - and instead of connecting the communication bus to a parallel or serial model, they could in fact communicate on a diagonal basis. This would make it possible to send commands, data, etc. in a 3D space rather than just a "queue". This, of course, would demand a different "mindset" of coding - everything would have to be written from scratch - but the benefits would be tremendous: you could tenfold existing computational speed by increasing the communication across processor cores, maybe even more, even by today's technology standards.

    OK, sounds far-fetched, doesn't it? Well, get this: this was my invention six years ago (maybe even nine; I'm getting older, so I don't really care. I do care about freedom of information and sharing, not so much wealth, so listen on). The theory of what I just wrote here on Slashdot has more implications for your life in the future than you will ever be capable of comprehending (yes, I am full of myself, ain't I? Who cares? You don't know me). The point is, there was once a missing brick in the idea of diagonal cross-matrix computing: with yesteryear's technology it just would not have been feasible. But if you have ANY understanding of what I write here (yes, I am not kidding; this may change history as we know it, and I am drunk right now, and I don't want to keep a lid on it anymore), here we go: please think about what I just wrote, and look up Frances Hellman's lecture on magnetic materials in semiconductors, and you WILL have your fourth link in the 3-B-E-C (Base, Emitter, Collector) construction to make the Cross Matrix Processor possible. Just understand this: JoOngle invented it, Frances made it possible, and YOU read it from a drunk nobody on Slashdot.org. Now... go make it real!
  • by richieb ( 3277 ) <richieb@@@gmail...com> on Monday December 17, 2007 @02:25PM (#21727798) Homepage Journal
    Check out this article [oreilly.com] on O'Reilly's site. Threads are actually very low-level constructs (like pointers and manual memory management). Accordingly, the future belongs to languages that eliminate threads as the basis for concurrency. See Erlang and Haskell.

  • by caerwyn ( 38056 ) on Monday December 17, 2007 @02:31PM (#21727880)
    This is very, very wrong. Data-set partitioning is certainly one way of achieving parallelism in programming, but it is hardly the only way- nor is it applicable to all domains, as many problems have solutions with too many inter-cell data dependencies. In addition, threads provide a wealth of benefits to application developers by allowing multiple unrelated tasks to be performed simultaneously.

    There is, and will always be, overhead associated with parallelization. It may sound great to say "oh, we can farm out parts of this data set to other cores!", but that requires a lot of start-up and tear-down synchronization. It's not at all uncommon for overall performance to be improved by doing something *unrelated* at the same time, requiring less synchronization overhead.

    Are threads perfect for everything? No. But calling them the second-worst thing to happen to computing is, at best, disingenuous.
  • by jskline ( 301574 ) on Monday December 17, 2007 @02:37PM (#21727940) Homepage
    The fact is that programming, by and large, has gotten lazy, shiftless, and sloppy over time, not better or faster. Programmers really did rely on processing and memory architectures getting faster to overcome their coding bottlenecks. The words "optimized code" have little or no significance in today's programming shops because of budgets. Because of the push to get stuff out the door as quickly as possible, corners are cut all over the place.

    There once was a time when debugging was part of your job. Now someone else does that, and at most the better coders do some unit testing to ensure their code snippet does what it is supposed to. There generally isn't any "standard" with regard to process, except in some houses that follow *recommended coding guidelines*, but these are few and far between. Old-school coders had a process in mind to fit a project as a whole and could see the end running program. Now you are often asked to code an algorithm without any regard for, or concept of, how it might be used. A lot of strange stuff is going on out there in the business world with this!

    If there is a fundamental change in the basis of C++ et al., it could well have a detrimental effect on the employment market, as there will be many who cannot conceptualize multi-threading methodologies, much less model some existing process in this paradigm, and who will leave the market.

    I left the programming markets because of the clash of bean counters vs quality, and maybe this will have a telling change in that curve. I always did enjoy some coding over the years and maybe this would make an interesting re-introduction. I have personally not coded in a multi-threading project but have the concepts down. Might be fun!
  • Re:2005 Called (Score:1, Interesting)

    by morgan_greywolf ( 835522 ) on Monday December 17, 2007 @02:43PM (#21728022) Homepage Journal
    The difference between parallel programming and multithreaded programming is this: with a parallel algorithm, different parts of one task/thread are done on separate CPUs, whereas with multithreaded programming each thread/task is done entirely on one processor.

    It's not a semantic difference. Threads are basically just lightweight processes...so each thread of a program execution can be thought of as a different process. OTOH, in parallel programming, a thread/task is broken down into pieces and brought back together when the pieces are done. Think SETI@Home, but on a much smaller scale.

    This probably isn't all that useful for writing something like a web server, where it makes some sense to thread off each connection. But for a scientific computing application like a simulation or a climate model, where you have number crunching that can be done on different subsets of the data, you might want to break the calculations down so they occur on different processors. This requires some sophistication, because usually one part of the calculation depends on another part, and you have to send data back and forth between the parts. That involves more than multithreading; it is true parallel processing.
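
    A sketch of that split-and-rejoin shape in C++ (parallel_sum is a made-up name; each thread gets a contiguous slice and the partial results are combined at the end):

    #include <numeric>
    #include <thread>
    #include <vector>

    // Sum a large vector by giving each thread its own slice, then
    // combining the partial sums once every worker has joined.
    double parallel_sum(const std::vector<double>& data, unsigned nthreads) {
        std::vector<double> partial(nthreads, 0.0);
        std::vector<std::thread> workers;
        const std::size_t chunk = data.size() / nthreads;

        for (unsigned t = 0; t < nthreads; ++t) {
            std::size_t begin = t * chunk;
            std::size_t end   = (t + 1 == nthreads) ? data.size() : begin + chunk;
            workers.emplace_back([&data, &partial, begin, end, t] {
                partial[t] = std::accumulate(data.begin() + begin,
                                             data.begin() + end, 0.0);
            });
        }
        for (std::thread& w : workers) w.join();   // rejoin before combining
        return std::accumulate(partial.begin(), partial.end(), 0.0);
    }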
  • by steveha ( 103154 ) on Monday December 17, 2007 @02:54PM (#21728202) Homepage
    I know that languages like Erlang and Haskell are better for concurrent programming than more traditional languages. However, so far they have not been as popular as more traditional languages.

    Will the new world of concurrency cause a shift in language popularity? Or will traditional languages remain more popular, perhaps with some enhancements? C++ is gaining concurrency enhancements; C++, Python, and many other languages work well with map/reduce systems like Google MapReduce; and even with no enhancements to the language, you can decompose larger systems into multiple threads or multiple processes to better harness concurrency.

    If you know Haskell and Erlang, please comment: do those languages bring enough power or convenience for concurrency that they will rise in popularity? People grow very attached to their familiar languages and tools; to displace the entrenched languages, alternative languages need to not just be better, they need to be a lot better.

    steveha
  • by ClosedSource ( 238333 ) on Monday December 17, 2007 @03:07PM (#21728360)
    Instead of developing single-core chips with better performance, chip makers are now making multicore machines and expecting developers to provide the extra performance.

    Without the work of developers, multi-core chips will be like the extra transistors in transistor radios in the 1960s: good for marketing but functionally useless.
  • by mosel-saar-ruwer ( 732341 ) on Monday December 17, 2007 @03:18PM (#21728578)

    my current major language (Igor pro) will use all the cores automatically, and how many languages do multithread this way? Matlab(?), Octave(?)

    LabVIEW, by its very nature [which is graphical - based on "G" - the "Graphical" programming language] is kinda/sorta topologically self-threading: If a piece of LabVIEW code sits off in its own connected component, then [more or less] it gets its own thread.

    Of course, all your ".h" & ".c" [or ".cc"] files [& their innards] might very well break down into little distinct connected components which are ripe for running in their own threads; it's just that, unless you're some sort of super genius, you can't readily visualize all those connected components as they exist in your code.

    Now you and your colleagues could try to anticipate the connected components a priori, during the "planning" phase: You could draw huge pictures on the dry-erase board, and everyone could yell and scream at each other about the topological structure which the code should ultimately embody, and then everyone would have to promise - Scout's Honor! - that they would stick to the blueprint [which they might very well resent as having been shoved down their throats by some pointy-headed suit who didn't have any clue what he was talking about] - but the beauty of LabVIEW is that THE CODE IS THE BLUEPRINT [which I think is a point that Jack Reeves used to make [c2.com]].

    There's actually a Slashdotter, MOBE2001 [slashdot.org], who maintains a blog called Rebel Science News [blogspot.com], who's got some pretty interesting ideas here - he seems to be leaning towards a graphical approach to this [rebelscience.org] [realizing that the fundamental nature of the problem tends to be topological, rather than anything which we (YET!) would recognize as semantic], but his program is very, very ambitious [if I had a couple of spare lifetimes, I might just throw one in that general direction].

    Another line of thought which everyone should keep an eye on is the discipline of Petri nets [wikipedia.org] - it's kinduva big graphical/topological approach to state machines, which [if someone were to put the necessary elbow grease into it] might prove to be very useful in squeezing the most bang for the buck out of these massively multicore CPUs.
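
    The core of a Petri net is tiny, by the way - here's a toy sketch in C++ (names are made up; places hold token counts, and a transition may fire only when every input place has a token):

    #include <map>
    #include <string>
    #include <vector>

    // Toy Petri net: firing a transition consumes one token from each
    // input place and deposits one token in each output place.
    struct Transition {
        std::vector<std::string> inputs, outputs;
    };

    using Marking = std::map<std::string, int>;   // place name -> token count

    bool enabled(const Transition& t, const Marking& m) {
        for (const std::string& p : t.inputs) {
            auto it = m.find(p);
            if (it == m.end() || it->second < 1) return false;
        }
        return true;
    }

    void fire(const Transition& t, Marking& m) {
        for (const std::string& p : t.inputs)  --m[p];
        for (const std::string& p : t.outputs) ++m[p];
    }

    // Independently enabled transitions (with disjoint places) are exactly
    // the ones that could fire on different cores at the same time.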

  • by athloi ( 1075845 ) on Monday December 17, 2007 @03:21PM (#21728658) Homepage Journal
    When I first started programming, in BASIC on an Apple ][ (not IIe), I remember being baffled by the fact that the computer did not operate with multiple concurrent streams [blogspot.com]. To me, this seemed the point of making something that was "more than a calculator," and the only way we would be able to do the really interesting stuff with it.

    When I first started writing object-oriented code, I was somewhat dismayed to find that OO was an extension of the same ol' linear programming. It seemed to me that objects should be able to exist as if alive and react freely, but really they were just a fancy interface to the linear runtime. Color me disappointed yet again.

    Recognizing parallel computing is an important paradigm shift [chrisblanc.org]. Maybe when the world realizes the importance of parallel computing, and parallel thinking, we'll have that singularity that some writers talk about. People will no longer think in such basic terms and be so ignorant of context and timing. That in itself must be nice.

    Sutter's article hits home with all of this. His conclusion is that efficient programming, and elegant programming that takes advantage of, not conforms to, the parallel model is the future. Judging by the chips I see on the market today, he was right, 2.5 years ago. He will continue to be right. The question is whether programmers step up to this challenge, and see it as being as fun as I think it will be.
  • Re:2005 Called (Score:2, Interesting)

    by gazbo ( 517111 ) on Monday December 17, 2007 @03:57PM (#21729506)
    Indeed, sorting is highly parallelisable. My point was that you can't change general algorithmic complexity by adding k more processor cores.

    Regarding the other "he said n log n, not O(n log n)" comment... well, that's already been answered (and with considerably more tact than I would have used).

  • Re:2005 Called (Score:4, Interesting)

    by AuMatar ( 183847 ) on Monday December 17, 2007 @03:59PM (#21729558)
    Yes, it is something the app developer needs to deal with. The problem isn't creating threads - the OS does that. The problem is in protecting data from concurrent, unsynchronized access (this has to be done by the app, as the OS has no clue which data can and can't be accessed concurrently; that's an application detail) and in parallelizing algorithms (again, not something the OS can do). A sketch of what that app-level protection looks like is below.
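
    (In C++, with names made up for illustration - the OS schedules both threads happily either way; only the application knows this counter needs a lock:)

    #include <mutex>
    #include <thread>

    int counter = 0;            // shared state the OS knows nothing about
    std::mutex counter_mutex;   // app-level protection

    void bump(int times) {
        for (int i = 0; i < times; ++i) {
            std::lock_guard<std::mutex> lock(counter_mutex);
            ++counter;          // safe: one thread in here at a time
        }
    }

    int main() {
        std::thread a(bump, 100000), b(bump, 100000);
        a.join();
        b.join();
        // counter == 200000 only because of the mutex.
    }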

    Eventually, good parallel algorithm libraries will pop up. That will help with some subset of problems. I'd expect frameworks to pop up as well, helping with another. But in many cases it just comes down to changing how we write programs.

    And you're right, this isn't really a desktop issue - it's mainly a server one. Desktops really don't need all the power they have now; perhaps one percent of users outside of gamers actually use it. That doesn't make it any less important a problem to solve. Although I expect in the end people will still end up disappointed - parallelization is not magic pixie dust, and you can only get so much of a speedup. I wouldn't be surprised if those eight cores gave only a 2x speedup over a single core on many apps.
  • by John C Peterson ( 1204520 ) <jpeterson@western.edu> on Monday December 17, 2007 @04:06PM (#21729748) Homepage
    The purely functional approach has a lot of merit. But functional programming by itself probably won't solve the big problems. Erlang has a very specific approach to threads and parallelism that works very well when it's appropriate. A more general approach is taken in Concurrent Haskell (http://en.wikipedia.org/wiki/Concurrent_Haskell [wikipedia.org]), in which a Software Transactional Memory (STM) replaces the lower-level mechanisms such as locks that are so crucial to threaded programming. I expect the real breakthrough to occur when high-level concurrency tools like STM come into use to replace the existing parallel programming framework of threads and locks.

    There was a time when everyone assumed that automatic memory management was "too high level / slow / buggy" to be practical in a real programming language, but now most programmers are happy to build their programs without worrying about memory allocation. In the parallel world, threads and locks are the malloc/free of the past, and something like STM could well be the basis for a higher-level approach that will make concurrency a natural way to program.
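
    To give a feel for the programmer-facing shape of STM (not its implementation), here is a toy C++ stand-in where a single global lock plays the role of the transaction machinery - real STM runs transactions optimistically and retries on conflict, but the point is the same: user code composes atomic blocks and never touches a per-datum lock:

    #include <mutex>

    std::mutex global_transaction_lock;   // toy stand-in for real STM machinery

    // Run a 'transaction' atomically with respect to all others.
    template <typename F>
    void atomically(F&& transaction) {
        std::lock_guard<std::mutex> commit(global_transaction_lock);
        transaction();
    }

    // Usage: transfers compose without exposing any locks to the caller.
    void transfer(int& from, int& to, int amount) {
        atomically([&] {
            from -= amount;
            to   += amount;
        });
    }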
  • Re:Thank god (Score:3, Interesting)

    by gbjbaanb ( 229885 ) on Monday December 17, 2007 @04:47PM (#21730512)
    Yeah, but they do it really slowly 'cos you're tied to the framework that has to do it safely no matter what - even if you have 2 threads that never interact with each other, the framework will slap synchronisation all over them anyway.

    (I know - I had a discussion with a chap about C# thread-safe singleton initialisation. A simple app to test performance on my little laptop had a statically initialised singleton taking 1.5 seconds and lock-based initialisation taking 6 seconds. No big deal, we expect that; but then I ran the same tests on a dual-CPU server and both apps took 30 seconds - the framework decided it knew best.)
  • by master_p ( 608214 ) on Monday December 17, 2007 @05:25PM (#21731118)
    The cure for the problems of parallel programming (deadlocks, priority inversion, etc.) is the Actor model: each object is a separate thread, and calling a method does not invoke code; it only puts a request in the message queue of the called object. Then the thread behind the object wakes up and processes the requests.

    If an object wants a result from another object, then it obtains a future value that represents the result of the computation when it will be ready. When the caller wants the actual value, it blocks until the result is available.

    Of course, blocking on a result would cause a deadlock in recursive algorithms...therefore, objects don't wait for a result, they simply enter a new message loop at the position they wait for a result. When the result is ready, the callee wakes up the caller by putting a 'terminate current loop' message in the caller's message loop after the result is computed.

    The Actor model, implemented as described above, not only solves the problems of classical parallel programming (deadlocks, priority inversion, etc), but it also exposes whatever parallelism is there in a program.

    Synchronization is performed only in two places:

    1) when inserting/removing elements in an object's queue.
    2) when adding the current thread into the waiting list of a future value.

    Both synchronizations are implemented via spinlocks. In the case of the queue, there is no need to synchronize on the whole queue, just at its two ends.

    I have made a demo in C++, using Boehm's garbage collector (it is quite a complex system; it needs GC), and it works beautifully. With this model, there is no need to use mutexes, semaphores, wait conditions, or any other synchronization primitive.

    I chose C++ because:

    1) operator overloading allows future values to be treated naturally like non-future values.
    2) when waiting for a result, the waiting thread puts itself in the waiting list of the future. The nodes of the list are allocated on the stack; only C/C++ can do this, and it is crucial, because it minimizes allocation.

    Another advantage of this system is that tail recursion comes for free: when you call a method which you don't want the result of, the local stack is not exhausted, because there is no call, only a message placed in a queue.

    Patterns like the producer/consumer pattern come for free: one object simply invokes the other.

    Data parallelism comes for free: invoking a computation on an array of objects will execute the computations in parallel, on each element of the array. For example, increasing the elements of an array can take O(N) with one CPU and O(1) with N CPUs.

    Of course, it is much slower on two or even four cores than the same sequential code. But given 10 or more cores, programs start to exhibit linear increase in performance, depending on algorithm of course.

    The system is much like the nervous system of an animal: signals are transmitted slowly from one nerve to another, but processing is parallel, so the organism can do many things at the same time.

    Another similarity between this system and the nervous system of an animal is that when a nerve wants to transmit an electrical signal to another nerve, the nerves must synchronize, much like there should be synchronization when an object puts a message in the object of another thread.
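
    A minimal sketch of the message-pump core of such an actor, in C++ (this is just the shape of the idea, not my demo; a mutex and condition variable stand in for the spinlocks described above, and a poison-pill message ends the loop):

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>

    // A method call becomes a closure pushed onto the actor's queue;
    // the actor's own thread pops and runs the requests in order.
    class Actor {
    public:
        Actor() : worker([this] { pump(); }) {}
        ~Actor() {
            post([this] { running = false; });   // poison-pill message
            worker.join();
        }
        void post(std::function<void()> msg) {
            {
                std::lock_guard<std::mutex> lock(qlock);
                mailbox.push(std::move(msg));
            }
            wakeup.notify_one();
        }
    private:
        void pump() {
            while (running) {
                std::unique_lock<std::mutex> lock(qlock);
                wakeup.wait(lock, [this] { return !mailbox.empty(); });
                std::function<void()> msg = std::move(mailbox.front());
                mailbox.pop();
                lock.unlock();
                msg();                           // process one request
            }
        }
        std::queue<std::function<void()>> mailbox;
        std::mutex qlock;
        std::condition_variable wakeup;
        bool running = true;
        std::thread worker;   // declared last: starts after the rest is built
    };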
  • Re:2005 Called (Score:2, Interesting)

    by AuMatar ( 183847 ) on Monday December 17, 2007 @08:19PM (#21733148)
    Very few people develop programs. Even then, 99% of development is basically zero load. Only compilation takes time, and that's a small portion of it with any decent build script. Unless you're developing something truly huge, like the Linux kernel, it just isn't an issue.

    Editing video is also a niche use case. Not doing it myself, I won't comment on how much resources it really requires. But maybe 1 in 20 or 50 people does this regularly, if that.

    Touching up photos takes almost no resources on a modern machine. People were doing this efficiently on 1 GHz and lower processors.

    Browsing, even with multiple windows and tabs and AJAX, is very, very low resource usage. My Eee PC handles it easily, despite being a 900 MHz Celeron UNDERCLOCKED to 600 MHz.

    Where the hell did I mention moving anything off the desktop? I think web-based apps suck for the most part. My point was that most computers are at 1% load or less most of the time, and sit at 30% load or less while in active use, except for those machines being used for video games. You seem to have some hugely inflated idea of what kind of resources things actually use; the common email, web surfing, and office use case can easily be handled by a 300 MHz Pentium II, if not less.
  • Re:OS/2? (Score:4, Interesting)

    by TheRaven64 ( 641858 ) on Monday December 17, 2007 @08:38PM (#21733286) Journal

    I'm of the opposite opinion; it's a shame that so many people equate parallel processing with threads.
    I read that and wished I had mod points. Anyone who has programmed in a language designed for concurrency, like Erlang, Termite, or a few Haskell dialects, hates using threads. Threads are something that two kinds of people should use: operating system designers and compiler writers. Everyone else should be using a higher-level abstraction.

    The big problem is not the operating system designers, it's the CPU designers. They integrated two orthogonal concepts, protection and translation, into the same mechanisms (page tables, segment tables, etc.). The operating system wants translation so it can implement virtual memory. The userspace program wants protection so it can use parallel contexts efficiently. Mondrian memory protection would fix this, but no one has implemented it in a commercial microprocessor (to my knowledge).

  • by dbIII ( 701233 ) on Monday December 17, 2007 @08:54PM (#21733404)
    Even a child's toy like the Nintendo DS from 2004 has two cores. Developers need to remember it isn't the early 1990s anymore and that they will have to deal with multiprocessor machines.
  • Re:2005 Called (Score:3, Interesting)

    by joto ( 134244 ) on Tuesday December 18, 2007 @01:23AM (#21735148)
    Actually, Itanium was a fairly good idea. That it didn't work out can be ascribed to politics and real-world issues as much as to technical ones. For example, the requirements said it should be able to run x86 code unmodified (why? if you want x86 you already know where to get it, right?). It was oversold (the next desktop processor) and underperformed (late delivery, bad performance). None of these issues indicates that explicitly parallel instruction computing (EPIC) is a bad idea. And they certainly have good Itanium compilers now. The main problem with Itanium (apart from the initial delays) was that it was a solution in search of a problem. It still is. But what a marvellous solution!
  • Re:Erlang (Score:1, Interesting)

    by Anonymous Coward on Tuesday December 18, 2007 @04:28AM (#21736368)
    The good thing about Erlang is that developers don't need to understand concurrency. This is especially beneficial for ErlyWeb, where the author literally has no clue.

    I'm not trying to be mean, but talking with him, he really doesn't understand it, and I think he has never programmed anything using multiple threads. At first I was really scared, finding that Erlang enthusiasts just couldn't grasp the basics - until I read Armstrong's book and realized that they didn't need to. Joe gets it, and the language hides it so well that you just don't need to understand it whatsoever. Just as Java developers don't need to understand pointers, and OO programmers can (often) avoid recursion, Erlang coders just don't need to deal with it. I'm still a bit put off, and I wouldn't let the author near any infrastructural code in a threaded language, but Erlang really has become the VB of concurrency. Quite a feat.
  • High level language (Score:3, Interesting)

    by oliderid ( 710055 ) on Tuesday December 18, 2007 @07:14AM (#21737008) Journal
    I guess this will be a dumb question, but:

    Why can't a Java virtual machine take on the burden of multi-core adaptation?

    They have promised "write once run anywhere"!

    Lazy coder :-)

  • Re:Thank god (Score:3, Interesting)

    by CoughDropAddict ( 40792 ) * on Tuesday December 18, 2007 @02:20PM (#21741280) Homepage
    Essentially, they turn
    for (int i = 0; i < 100; i++) {
            a[i] = a[i]*a[i];
    }

    into

    Parallel.For(0, 100, delegate(int i) {
            a[i] = a[i]*a[i];
    });

    and the hint tells the .NET runtime to execute the solution in parallel. No shared memory, no locks, all done for you. That's the way parallelism should work, IMHO.


    So let me get this straight: the runtime is going to
    1. find one or more other threads to farm this work out to, either by creating new ones or taking them from an existing pool
    2. make the thread(s) runnable, and wait for them to get scheduled by the OS
    3. coordinate the communication between the main thread and the other thread(s) about what part of the solution each thread should work on
    ...and this is supposed to be faster than a simple for loop with 100 iterations?

    Sounds like a losing proposition to me. I don't think this is the kind of parallelism that is going to bring noticeable gains.
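
    For what it's worth, the usual answer to this objection is a grain-size cutoff: fall back to a plain loop when the range is too small to pay for the thread traffic. A sketch in C++ (parallel_for and the threshold are made up for illustration; a real runtime would also reuse pooled threads instead of spawning new ones):

    #include <functional>
    #include <thread>

    // Recursively split the range; ranges at or below 'grain' iterations
    // (like the 100-iteration loop above) run as a plain serial loop.
    void parallel_for(int begin, int end, const std::function<void(int)>& body,
                      int grain = 10000 /* made-up threshold */) {
        if (end - begin <= grain) {
            for (int i = begin; i < end; ++i) body(i);   // serial fallback
            return;
        }
        int mid = begin + (end - begin) / 2;
        std::thread left(parallel_for, begin, mid, std::cref(body), grain);
        parallel_for(mid, end, body, grain);             // this thread takes the rest
        left.join();
    }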
