Slashdot's Interview With Swift Creator Chris Lattner

You asked, he answered! The creator of Apple's Swift programming language (and a self-described "long-time reader/fan of Slashdot") stopped by on his way to a new job at Tesla just to field questions from Slashdot readers. Read on for Chris's answers...
Questions, and my best wishes.
by Volanin

Since you're the creator of LLVM, I'd like to know, in your opinion, what's the greatest advantage of LLVM/Clang over the traditional and established GNU GCC compiler. Also, what's the greatest advantage of GNU GCC (or if you'd prefer, any other compiler) over LLVM/Clang, something that you'd like to "port" someday?

CL: GCC and LLVM/Clang have had a mutually beneficial relationship for years. Clang's early lead in terms of compiler error message expressivity has led the GCC developers to improve their error messages, and obviously GCC provided a very high bar to meet when LLVM and Clang were being brought up. It is important to keep in mind that GCC doesn't have to lose for Clang to win (or vice versa). Also, I haven't used GCC for many years; that said, since you asked, I'll try to answer to the best of my knowledge:

From my understanding, for a C or C++ programmer on Linux, the code generated by GCC and Clang is comparable (each wins some cases and loses others). Both compilers have a similar feature set (e.g. OpenMP is supported by both). Clang compiles code significantly faster than GCC in many cases, still generates slightly better error and warning messages than GCC, and is usually a bit ahead in terms of support for the C++ standard. That said, the most significant benefit is improved compile time.

Going one level deeper, the most significant benefit I see of the LLVM optimizer and code generator over GCC is its architecture and design. LLVM is built with a modular library-based design, which has allowed it to be used in a variety of ways that we didn't anticipate. For example, it has been used by movie studios to JIT compile and optimize shaders used in special effects, has been used to optimize database queries, and LLVM is used as the code generator for a much wider range of source languages than GCC.

Similarly, the most significant benefit of Clang is that it is also modular. It specifically tackles problems related to IDE integration (including code completion, syntax highlighting, etc.) and has a mature and vibrant tooling ecosystem built around it. Clang's design (e.g. lack of pervasive global variables) also allows it to be used at runtime, for example in OpenCL and CUDA implementations.

The greatest advantage I can see of GCC over LLVM/Clang is that it is the default compiler on almost all Linux distributions, and of course Linux is an incredibly important platform for developers. I'd love to see more Linux distributions start shipping Clang by default. Finally, GCC provides an Ada frontend, and I'm not aware of a supported solution for Ada + LLVM.

Future of LLVM?
by mveloso

Where do you see LLVM going?

CL: There are lots of things going on, driven by the amazing community of developers (which is growing like crazy) pushing forward the llvm.org projects. LLVM is now used pervasively across the industry, by companies like Apple, Intel, AMD, Nvidia, Qualcomm, Google, Sony, Facebook, ARM, Microsoft, FreeBSD, and more. I'd love for it to be used more widely on Windows and Linux.

At the same time, I expect LLVM to continue to improve in a ton of different ways. For example, ThinLTO is an extremely promising approach that should bring scalable link-time optimization to everyone, potentially becoming the default for -O3 builds. Similarly, there is work going on to speed up compile times, add support for the Microsoft PDB debug information format, and too many other things to mention here. It is an incredibly exciting time. If you're interested in a taste of what is happening, take a look at the proceedings from the recent 2016 LLVM Developer Meeting.

Finally, the LLVM Project continues to expand. Relatively recent additions include llgo (a Go frontend) and lld (a faster linker than "Gold"), and there are rumors that a Fortran frontend may soon join the fold.

The Mythical Compiler - VLIW
by Anonymous Coward

Is there any hope for VLIW architectures? The general consensus seems to be that Itanium tanked because the compiler technology wasn't able to make the leap needed. Linus complained about the Itanium ISA exposing the pipelines to assembly developers. What are the challenges from a compiler writer's perspective with VLIW?

CL: I can't speak to why Itanium failed (I suspect that many non-technical issues like business decisions and schedule impacted it), but VLIW is hardly dead. VLIW designs are actively used in some modern GPUs and are widely used in DSPs - one example supported by LLVM is the Qualcomm Hexagon chip. The major challenge when compiling for a VLIW architecture is that the compiler needs good profile information, so it has an accurate idea of the dynamic behavior of the program.

How much of Swift is Based on Groovy?
by Anonymous Coward

So how much of Swift was inspired by Groovy? Both come from more high-end languages and look and act almost identical.

CL: It is an intentional design point of Swift that it look and feel "familiar" to folks coming from many other languages: not just Groovy. Feeling familiar and eliminating unnecessary differences from other programming languages is a way of reducing barriers to entry to start programming in Swift. It is also clearly true that many languages syntactically influence each other, so you see convergence of ideas coming from many different places.

That said, I think it is a stretch to say that Swift and Groovy look and act "identical", except in some very narrow cases. The goal of Swift is simply to be as great as possible, it is not trying to imitate some other programming language.

C#
by Anonymous Coward

What do you think about Microsoft and C# versus the merits of Swift?

CL: I have a ton of respect for C#, Rust, and many other languages, and Swift certainly benefited by being able to observe their evolution over time. As such, there are a lot of similarities between these languages, and it isn't an accident.

Comparing languages is difficult in this format, because a lot of the answers come down to "it depends on what you're trying to do", but I'll give it a shot. C# has the obvious benefit of working with the .NET ecosystem, whereas Swift is stronger at working in existing C and Objective-C ecosystems like Cocoa and POSIX.

From a language level, Swift has a more general type system than C# does, offers more advanced value types, protocol extensions, etc. Swift also has advantages in mobile use cases because ARC requires significantly less memory than garbage collected languages for a given workload. On the other hand, C# has a number of cool features that Swift lacks, like async/await, LINQ, etc.
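For a purely illustrative flavor of the value types and protocol extensions mentioned above, here is a tiny sketch; the Point and Describable names are made up for the example:

    // Value type: structs are copied on assignment, so 'b' is independent of 'a'.
    struct Point {
        var x: Double
        var y: Double
    }

    protocol Describable {
        var summary: String { get }
    }

    // Protocol extension: every conforming type gets this implementation for free.
    extension Describable {
        var summary: String { return "a \(type(of: self)) value" }
    }

    extension Point: Describable {}   // no body needed; the default above applies

    var a = Point(x: 1, y: 2)
    var b = a
    b.x = 99                          // mutating the copy does not touch 'a'
    print(a.x, b.x)                   // 1.0 99.0
    print(a.summary)                  // "a Point value"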

Rust
by Anonymous Coward

Chris, what are your general thoughts about Rust as a programming language?

CL: I'm a huge Rust fan: I think that it is a great language and its community is amazing. Rust has a clear lead over Swift in the system programming space (e.g. for writing kernels and device drivers) and I consider it one of the very few candidates that could lead to displacing C and C++ with a more safe programming language.

That said, Swift has advantages in terms of more developers using it, a more mature IDE story, and offers a much shallower learning curve for new developers. It is also very likely that a future version of Swift will introduce move-only types and a full ownership model, which will make Swift a lot more interesting in the system programming space.

BASIC
by jo7hs2

As someone who has been involved with the development of programming languages, do you think it is still possible to come up with a modern-day replacement for BASIC that can operate in modern GUI environments?

It seems like all attempts since we went GUI (aside from maybe early VisualBASIC and Hypercard) have been too complicated, and all attempts have been platform-specific or abandoned. With the emphasis on coding in schools, it seems like it would be helpful to have a good, simple, introductory language like we had in BASIC.


CL: It's probably a huge shock, but I think Swift could be this language. If you have an iPad, you should try out the free Swift Playgrounds app, which is aimed to teach people about programming, assuming no prior experience. I think it would be great for Swift to expand to provide a VisualBASIC-like scripting solution for GUI apps as well.

Cross-platform
by psergiu

How cross-platform is Swift? Are the GUI libraries platform-dependent or independent? I.e., can I write a single Swift program with a GUI that will compile, work the same, and look similar on multiple platforms: Linux, Mac OS, real Unix-es & BSDs, AIX, Windows?

CL: Swift is Open Source, has a vibrant community with hundreds of contributors, and builds on the proven LLVM and Clang technology stack. The Swift community has ported Swift itself to many different platforms beyond Apple platforms: it runs on various Linux distros and work is underway by various people to port it to Android, Windows, various BSDs, and even IBM mainframes.

That said, Swift does not provide a GUI layer, so you need to use native technologies to do so. Swift helps by providing great support for interoperating with existing C code (and will hopefully expand to support C++ and other languages in the future). It is possible for someone to design and build a cross platform GUI layer, but I'm not aware of any serious efforts to do so.
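To give a rough, illustrative sense of that C interoperability (plain C calls, not a GUI), a Swift file can import the platform's C library module and call C functions directly:

    // Import the C standard library for the current platform.
    #if os(Linux)
    import Glibc
    #else
    import Darwin
    #endif

    // getpid() and getenv() are ordinary C functions, callable straight from Swift.
    let pid = getpid()
    print("running as process \(pid)")

    if let home = getenv("HOME") {
        // getenv returns a C string pointer; convert it to a Swift String.
        print("HOME = \(String(cString: home))")
    }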

Exception Handling
by andywest

Why did Swift NOT have exception handling in the first couple of versions?

CL: Swift 1 (released in 2014) didn't include an error handling model simply because it wasn't ready in time: it was added in Swift 2 (2015). Swift 2 included a number of great improvements that weren't ready for Swift 1, including protocol extensions. Protocol extensions dramatically expanded the design space of what you can do with Swift, bringing a new era of "protocol oriented programming". That said, even Swift 3 (2016) is still missing a ton of great things we hope to add over the coming years: there is a long and exciting road ahead! See questions below for more details.
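For reference, the error handling model that arrived in Swift 2 centers on throws, try, and do/catch; a minimal sketch (the ParseError and parseInt names are invented for the example):

    enum ParseError: Error {
        case empty
        case notANumber(String)
    }

    // A throwing function declares 'throws' and uses 'throw' to signal failure.
    func parseInt(_ text: String) throws -> Int {
        if text.isEmpty { throw ParseError.empty }
        guard let value = Int(text) else { throw ParseError.notANumber(text) }
        return value
    }

    // Callers mark the call with 'try' and handle failures in do/catch.
    do {
        let n = try parseInt("42")
        print("parsed \(n)")
    } catch ParseError.notANumber(let text) {
        print("'\(text)' is not a number")
    } catch {
        print("something else went wrong: \(error)")
    }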

Why are strings passed by value?
by Anonymous Coward

Strings are immutable pass-by-reference objects in most modern languages. Why did you make this decision?

CL: Swift uses value semantics for all of its "built in" collections, including String, Array, Dictionary, etc. This provides a number of great advantages by improving efficiency (permitting in-place updating instead of requiring reallocation), eliminating large classes of bugs related to unanticipated sharing (someone mutates your collection when you are using it), and defines away a class of concurrency issues. Strings are perhaps the simplest of any of these cases, but they get the efficiency and other benefits.
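A short sketch of what those value semantics look like in practice:

    var original = ["swift", "llvm"]
    var copy = original          // Array is a value type: this is a logical copy

    copy.append("clang")         // mutating the copy...
    print(original)              // ["swift", "llvm"]   ...leaves the original untouched

    var name = "Chris"
    var other = name             // String behaves the same way
    other += " Lattner"
    print(name)                  // "Chris"
    print(other)                 // "Chris Lattner"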

If you're interested in more detail, there is a wealth of good information about the benefits of value vs reference types online. One great place to start is the "Building Better Apps with Value Types in Swift" talk from WWDC 2015.

As a language designer
by superwiz

Since you have been involved with 2 lauded languages, you are in a good position to answer the following question: "are modern languages forced to rely on language run-time to compensate for the facilities lacking in modern operating systems?" In other words, have the languages tried to compensate for the fact that there are no new OS-level light-weight paradigms to take advantage of multi-core processors?

CL: I'm not sure exactly what you mean here, but yes: if an OS provides functionality, there is no reason for a language runtime to replicate it, so runtimes really only exist to supplement what the OS provides. That said, the line between the OS and libraries is a very blurry one: Grand Central Dispatch (GCD) is a great example, because it is a combination of user space code, kernel primitives, and more all designed together.
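As a small illustration of how thin that line looks from the language's side, GCD is just an importable library from Swift (a minimal sketch):

    import Dispatch

    let group = DispatchGroup()

    // Submit two independent pieces of work to a concurrent background queue.
    for name in ["first", "second"] {
        DispatchQueue.global().async(group: group) {
            print("\(name) task running")
        }
    }

    // Block the current thread until both tasks have finished.
    group.wait()
    print("all work done")

Under the covers, those queues are backed by the kernel primitives mentioned above, but none of that is visible at the call site.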

Parallelism
by bill_mcgonigle

Say, about fifteen years ago, there was huge buzz about how languages and compilers were going to take care of the "Moore's Law Problem" by automating the parallelism of every task that could be broken up. With static single-assignment form and the like, the programmer was going to be freed from manually doing the parallelism.

With manufacturers starting to turn out 32- and 64-core chips, I'm wondering how well we did on that front. I don't see a ton of software automatically avoiding pegging a single core on my CPUs. The ones that aren't quite as bad are mostly just doing a fork() in 2017. Did we get anywhere? Are we almost there? Is software just not compiled right now? Did it turn out to be harder than expected? Were languages not up to the task? Is hardware (e.g. memory access architectures) insufficient? Was the possibility oversold in the first place?


CL: I can't provide a global answer, but clearly parallelism is being well utilized in some "easy" cases (e.g. speeding up build time of large software, speed of 3D rendering, etc). Also, while large machines are available, most computers are machines with only 1-4 cores (e.g. mobile phones and laptops), which means that most software doesn't have to cope with the prospect of 32-core machines… yet.

Looking forward, I am skeptical of the promises of overly magic solutions like compiler auto-parallelization of single threaded code. These "heroic" approaches can work on specific benchmarks and other narrow situations, but don't lead to a predictable and reliable programmer model. For example, a good result would be for you to use one of these systems and get a 32x speedup on your codebase. A really bad result would be to then make a minor change or bug fix to your code, and find out that it caused a 32x slowdown by defeating the prior compiler optimization. Magic solutions are problematic because they don't provide programmer control.

As such, my preferred approach is for the programming language to provide expressive ways for the programmer to describe their parallelism, and then allow the compiler and runtime to efficiently optimize it. Things like actor models, OpenMP, C++ AMP, and goroutines seem like the best approach. I expect concurrency to be an active area of development in the Swift language, and hope that the first pieces of the story will land in Swift 5 (2018).
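In today's Swift, the closest thing to "the programmer describes the parallelism" is GCD; for example, an explicitly data-parallel loop can be written with concurrentPerform. A small sketch (the work inside the loop is a placeholder):

    import Dispatch

    let resultsQueue = DispatchQueue(label: "results")   // serial queue guards shared state
    var finished = [Int]()

    // The programmer asserts that the 8 chunks are independent; GCD spreads them
    // across the available cores and returns once every iteration has run.
    DispatchQueue.concurrentPerform(iterations: 8) { index in
        // ... do one independent slice of the work here ...
        resultsQueue.sync { finished.append(index) }      // funnel shared mutation through one queue
    }

    print("completed chunks: \(finished.sorted())")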

Any insight into language design choices?
by EMB Numbers

I am a 25+ year Objective-C programmer and among other topics, I teach "Mobile App Development" and "Comparative Languages" at a university. I confess to being perplexed by some Swift language design decisions. For example,
  • Swift is intended to be a "Systems Programming Language", is it not? Yet, there is no support for "volatile" variables needed to support fundamental "system" features like direct memory access from peripheral hardware.
  • Why not support "dynamic runtime features" like the ones provided by the Objective-C language and runtime? It's partly a trick question because Swift is remarkably "dynamic" through use of closures and other features, but why not go "all the way?"

CL: These two questions get to the root of Swift "current and future". In short, I would NOT say that Swift is an extremely compelling systems programming or scripting language today, but it does aspire to be great for this in the years ahead. Recall that Swift is only a bit over two years old at this point: Swift 1.0 was released in September 2014.

If you compare Swift to other popular contemporary programming languages (e.g. Python, Java/C#, C/C++, Javascript, Objective-C, etc.), a major difference is that Swift was designed for generality: those languages were initially designed for a specific niche and use case and then grew organically outward from there.

In contrast, Swift was designed from the start to eventually span the gamut from scripting language to systems programming, and its underlying design choices anticipate the day when all the pieces come together. This is no small feat, because it requires pulling together the strengths of each of these languages into a single coherent whole, while balancing out the tradeoffs forced by each of them.

For example, if you compare Swift 3 to scripting languages like Python, Perl, and Ruby, I think that Swift is already as expressive, teachable, and easy to learn as a scripting language, and it includes a REPL and support for #! scripts. That said, there are obvious missing holes, like no regular expression literals, no multi-line string literals, and poor support for functionality like command line argument processing - Swift needs to be more "batteries included".
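For instance, a Swift source file can already run directly as a #! script on a machine with the toolchain installed; a tiny illustrative example (greet.swift is just a made-up name):

    #!/usr/bin/env swift

    // Echo a greeting for each argument passed on the command line.
    let names = CommandLine.arguments.dropFirst()

    if names.isEmpty {
        print("usage: greet.swift <name> ...")
    } else {
        for name in names {
            print("Hello, \(name)!")
        }
    }

Make it executable with chmod +x greet.swift and it runs like any other script.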

If you compare Swift 3 to systems programming languages like C/C++ or Rust, then I think there is a lot to like: Swift provides full low-level memory control through its "unsafe" APIs (e.g. you can directly call malloc and free with no overhead from Swift, if that is what you want to do). Swift also has a much lighter-weight runtime than many other high-level languages (e.g. no tracing garbage collector or threading runtime is required). That said, it has a number of holes in the story: no support for memory-mapped I/O, no support for inline assembly, etc. More significantly, getting low-level control of memory requires dropping down to the unsafe APIs, which provide a very C/C++ level of control, but also the C/C++-level lack of memory safety. I'm optimistic that the ongoing work to bring an ownership model to Swift will provide the kinds of safety and performance that Rust offers in this space.
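A sketch of what that low-level control looks like with the current unsafe APIs (the sizes and values here are arbitrary):

    #if os(Linux)
    import Glibc
    #else
    import Darwin
    #endif

    // Grab 16 bytes straight from the C allocator: no runtime bookkeeping involved.
    let byteCount = 16
    guard let raw = malloc(byteCount) else { fatalError("allocation failed") }

    // View the raw memory as Int32 values and fill it in place.
    let ints = raw.bindMemory(to: Int32.self, capacity: byteCount / MemoryLayout<Int32>.size)
    for i in 0..<4 {
        ints[i] = Int32(i * i)
    }
    print(ints[3])   // 9

    // Exactly as in C, the programmer is responsible for releasing the memory.
    free(raw)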

If you compare Swift 3 to general application level languages like Java, I think it is already pretty great (and has been proven by its rapid adoption in the iOS ecosystem). The server space has very similar needs to general app development, but both would really benefit from a concurrency model (which I expect to be an important focus of Swift 5).

Beyond these areas of investment there is no shortage of ideas for other things to add over time. For example, the dynamic reflection capabilities you mention need to be baked out, and many people are interested in things like pre/post-conditions, language support for typestate, further improvements to the generics system, improved submodules/namespacing, a hygienic macro system, tail calls, and so much more.

There is a long and exciting road ahead for Swift development, and one of the most important features was a key part of Swift 3: unlike in the past, we expect Swift to be extremely source compatible going forward. These new capabilities should be addable to the language and runtime without breaking code. If you are interested in following along or getting involved with Swift development, I highly encourage you to check out the swift-evolution mailing list and project on GitHub.

Any hope for more productive programming?
by Kohath

I work in the semiconductor industry and our ASIC designs have seen a few large jumps in productivity:

  • Transistors and custom layouts transitioned to standard cell flows and automated P&R.
  • Design using logic blocks transitioned to synthesized design using RTL with HDLs.
  • Most recently, we are synthesizing circuits directly from C language.

In the same timeframe, programming has remained more or less the same as it always was. New languages offer only incremental productivity improvements, and most of the big problems from 10 or 20 years ago remain big problems.

Do you know of any initiatives that could produce a step-function increase (say 5-10x) in coding productivity for average engineers?

CL: There have been a number of attempts to make a "big leap" in programmer productivity over the years, including visual programming languages, "fourth-generation" programming languages, and others. That said, in terms of broad impact on the industry, none of these have been as successful as the widespread rise of "package" ecosystems (like Perl's CPAN, Ruby Gems, NPM in Javascript, and many others), which allow rapid reuse of other people's code. When I compare the productivity of a modern software developer using these systems, I think it is easy to see a 10x improvement in coding productivity, compared to a C or C++ programmer 10 years ago.

Swift embraces this direction with its builtin package manager "SwiftPM". Just as the Swift language learns from other languages, SwiftPM is designed with an awareness of other package ecosystems and attempts to assemble their best ideas into a single coherent vision. SwiftPM also provides a portable build system, allowing development and deployment of cross-platform Swift packages. SwiftPM is still comparatively early in its design, but has some heavy hitters behind it, particularly those involved in the various Swift for the Server initiatives. You might also be interested in the online package catalog hosted by IBM.
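To give a flavor of SwiftPM, a package is described by a small Package.swift manifest; in the Swift 3-era format it looks roughly like this (the package name and dependency URL are made up):

    // Package.swift
    import PackageDescription

    let package = Package(
        name: "ExampleTool",
        dependencies: [
            // Pull in another package by URL, pinned to a major version.
            .Package(url: "https://github.com/example/SomeLibrary.git", majorVersion: 1),
        ]
    )

Running swift build then fetches the dependencies and builds everything, and the same manifest works anywhere the toolchain does.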

Looking ahead, even though it's a bit cliché, I'd have to say that machine learning techniques (convolutional neural nets and deep neural nets, for example) really are changing the world by making formerly impossible things merely "hard". While it currently seems that you need a team of Ph.D.s to apply and develop these techniques, when they become better understood and developed, I expect them to be more widely accessible to the rest of us. Another really interesting recent development is the idea of "word vectors," which is a pretty cool area to geek out on.


Comments Filter:
  • C# vs Swift (Score:5, Interesting)

    by phantomfive ( 622387 ) on Monday January 23, 2017 @05:05AM (#53719433) Journal

    From a language level, Swift has a more general type system than C# does, offers more advanced value types, protocol extensions, etc. Swift also has advantages in mobile use cases because ARC requires significantly less memory than garbage collected languages for a given workload.

    I feel like this should be quoted any time a C# programmer comes along thinking they have the perfect language (unaware of what else is out there). C# is great, but it's not the greatest possible language.

    • Just about anyone saying they only need one language is an instant tell of a moron. This isn't unique to C# or Swift developers.
    • by Ed Avis ( 5917 )
      It would be interesting to see whether Swift can target the .NET virtual machine (or, indeed, the Java one). How many of the limitations of C# and Java compared to Swift are purely language design issues, and how many are more or less imposed by the runtime?
      • by Faw ( 33935 )

        There is RemObjects Elements [elementscompiler.com]. It compiles C#/Swift/Delphi and soon also Java to .NET/iOS/Java/macOS/Windows/Linux executables. The Windows/Linux compiling is new. It's really cool stuff.

    • Re:C# vs Swift (Score:5, Insightful)

      by TheRaven64 ( 641858 ) on Monday January 23, 2017 @06:23AM (#53719571) Journal

      I'm not convinced by Chris' argument here. GC is an abstract policy (objects go away after they become unreachable); ARC is a policy (GC for acyclic data structures, deterministic destruction when the object is no longer reachable) combined with a mechanism (per-object refcounts, refcount manipulation on every update). There is a huge design space for mechanisms that implement the GC policy and they all trade throughput and latency in different ways. It would be entirely possible to implement the C# GC requirements using ARC combined with either a cycle detector or a full mark-and-sweep-like mechanism for collecting cycles. If you used a programming style without cyclic data structures then you'd end up with almost identical performance for both.

      Most mainstream GC implementations favour throughput over latency. In ARC, you're doing (at least) an atomic op for every heap pointer assignment. In parallel code, this leads to false sharing (two threads updating references to point to the same object will contend on the reference count, even if they're only reading the object and could otherwise have it in the shared state in their caches). There is a small cost with each operation, but it's deterministic and it doesn't add much to latency (until you're in a bit of code that removes the last reference to a huge object graph and then has to pause while it's all collected - one of the key innovations of OpenStep was the autorelease pool, which meant that this kind of deallocation almost always happens in between runloop iterations). A number of other GC mechanisms are tuned for latency, to the extent that they can be used in hard realtime systems with a few constraints on data structure design (but fewer than if you're doing manual memory management).

      This is, unfortunately, a common misconception regarding GC: that it implies a specific implementation choice. The first GC papers were published almost 60 years ago and it's been an active research area ever since, filling up the design space with vastly different approaches.

      • Garbage Collection (Score:5, Interesting)

        by aberglas ( 991072 ) on Monday January 23, 2017 @06:40AM (#53719599)

        There were experiments done in the 1970s for Lisp systems that showed that ARC was generally the slowest garbage collection algorithm, despite what C++ programmers think. You pay for every pointer move, rather than just once at GC. And the 1980s generational systems were even better.

        I do not know Swift, but C compatibility is blown out of the water when a GC can actually move objects in memory, as it should.

        • For some real-time needs, it is more important to have the "payment" spread all over in a deterministic way rather than appearing all at once at a random point in time.

          • by Megol ( 3135005 )

            If so then use a real-time GC algorithm.

        • by BasilBrush ( 643681 ) on Monday January 23, 2017 @11:07AM (#53720561)

          If there were experiments for LISP in the 70s, they'll have very little to say about what an optimising compiler does now. Indeed much of Swift's ARC doesn't involve any actual reference counting at run time, as static analysis has already determined when many objects can be deleted.

        • Who says C++ programmers think ARC is fastest? The fastest is automatic stack based memory management, what with being free and all, and that's what's probably used most of all in C++.

          • Good point, and C#/Java programmers rarely use that for objects.
            • Good point, and C#/Java programmers rarely use that for objects.

              As far as I know you can't in those languages. The optimizing JIT does escape analysis which attempts it where possible, but it's never going to be as effective as C++ in that regard.

        • There are more issues to it than that.
          C++ is more efficient because it can use the stack more and can store primitive types in STL collections directly.
          Languages like C# and Java always have to instantiate objects on the heap, and primitive types can't be stored in collection classes.
          Then, look at the amount of memory that the runtime uses; it's far, far more than any C++ exe. For instance, run an empty C# Unity project and you've blown 50 MB of RAM already.

        • You pay for every pointer move... when a GC can actually move objects in memory

          That's not actually a common need.

        • I can't speak for the experiments you're referring to specifically, but I've seen studies make the mistake of comparing a 1:1 ratio of std::shared_ptr objects to garbage collected references in other languages. I hate these comparisons because they overlook the fact that garbage collected languages are so enamored of their own GC firepower that they insist on creating garbage all over the place. Outside of the base primitives, pretty much everything becomes an object that is individually tracked on the heap.

        • C++ takes a more general approach. RAII with smart pointers can and should be used to manage any resource, including memory, database connections, files, whatever. This is not normally the best approach for memory, and reference counts have problems when you get into multithreading and caching, since they force another memory reference, and possibly a memory write, somewhere other than the actual variable. Like most things in language design, it's a tradeoff, and most modern languages have chosen more s

      • by Anonymous Coward

        You make some good points, but it's worth noting that:

        1. While you're completely correct in theory, if all of the mainstream implementations currently work in the assumed fashion then his point is still reasonably valid.

        2. ARC is designed to minimise the refcount editing / false sharing. How well this goes in practice depends on the static analysis capabilities of the compiler; it will never be perfect but with a good compiler and a decent programmer it can probably be very good. It's certainly much better

        • Re:C# vs Swift (Score:4, Interesting)

          by TheRaven64 ( 641858 ) on Monday January 23, 2017 @08:56AM (#53719945) Journal

          1. While you're completely correct in theory, if all of the mainstream implementations currently work in the assumed fashion then his point is still reasonably valid.

          The mainstream implementations optimise for a particular point in the tradeoff space because they're mainstream (i.e. intended to be used in that space). GCs designed for mobile and embedded systems work differently.

          2. ARC is designed to minimise the refcount editing / false sharing

          I've worked on the ARC optimisations a little bit. They're very primitive and the design means that you often can't elide the count manipulations. The optimisations are really there to simplify the front end, not to make the code better. When compiling Objective-C, clang is free to emit a retain for the object that's being stored and a release for the object that's being replaced. It doesn't need to try to be efficient, because the optimisers have knowledge of data and control flow that the front end lacks and so it makes sense to emit redundant operations in the front end and delete them in the middle. Swift does this subtly differently by having its own high-level IR that tracks dataflow, so the front end can feed cleaner IR into LLVM.

          The design of ARC does nothing to reduce false sharing. Until 64-bit iOS, Apple was storing the refcount in a look-aside table (GNUstep put it in the object header about 20 years before Apple). This meant that you were acquiring a lock on a C++ map structure for each refcount manipulation, which made it incredibly expensive (roughly an order of magnitude more expensive than the GNUstep implementation).

          Oh, and making the same optimisations work with C++ shared_ptr would be pretty straightforward, but it's mostly not needed because the refcount manipulations are inlined and simple arithmetic optimisations elide the redundant ones.

          • The design of ARC does nothing to reduce false sharing. Until 64-bit iOS, Apple was storing the refcount in a look-aside table (GNUstep put it in the object header about 20 years before Apple). This meant that you were acquiring a lock on a C++ map structure for each refcount manipulation, which made it incredibly expensive (roughly an order of magnitude more expensive than the GNUstep implementation).

            No, they didn't. There was one byte for the refcount, with 1..127 meaning "real refcount" and 128..255 meaning "(refcount - 192) plus upper bits stored elsewhere". The look-aside table was only used first if the refcount exceeded 127, and then the refcount would be 192 stored in the object, and the rest elsewhere. The next change would happen only if you increased or decreased the ref count by 64 in total. Very, very rare in practice.

      • The argument made here in reference to the paper is that ARC is a useful strategy when considering performance in relation to physical memory size. The throughput/latency trade-off is important for mobile applications, but as you mention, some implementations of a GC can perform well for latency (which is obviously crucial for a mobile app). The enormous performance penalty as the GC heap size approaches the amount of physical memory, however, is a major issue that cannot be easily worked around on a smalle
      • by JTL21 ( 190706 )

        Most of what you say is true but it is missing a huge aspect: memory usage.

        GC implementations trade peak memory usage for processing efficiency. Given that computing cost is in MANY cases memory-size based (typical pricing of virtual machines is driven by the number that can be packed in), the advantage shifts back to ARC and lower memory overhead (even if total CPU overhead is higher). Many GC systems require 2x the peak RAM that the application is actually using.

        Any mark and sweep process is also likely to be brutal

        • The only GC mechanism that requires double the memory that you use is a semispace compactor. A lot of modern GCs use this for the young generation (if the space fits in the cache, it's very cheap, especially if you use nontemporal loads / stores when relocating the objects. Some work at Sun Research a decade ago showed that you could do it entirely in hardware in the cache controller very cheaply). Most GCs use mark-and-compact on smaller regions than the entire heap. You're right that you get some cach
  • are modern languages forced to rely on language run-time to compensate for the facilities lacking in modern operating systems?" In other words, have the languages tried to compensate for the fact that there are no new OS-level light-weight paradigms to take advantage of multi-core processors?

    Another way of looking at this (not necessarily the right or wrong way) is to encourage as much functionality as possible in userland, instead of putting it in the kernel... for security purposes. Linux allows userland drivers for this reason (SANE scanner drivers are an example of this technique).

    • In other words, have the languages tried to compensate for the fact that there are no new OS-level light-weight paradigms to take advantage of multi-core processors?

      But this isn't true. Apple's Grand Central Dispatch has kernel support in Darwin.

      • Chris mentioned Grand Central Dispatch:

        That said, the line between the OS and libraries is a very blurry one: Grand Central Dispatch (GCD) is a great example, because it is a combination of user space code, kernel primitives, and more all designed together.

        • It'll take more than me making a fool of myself to convince me to RTFA before commenting ;-P

          As I understand it, the kernel support isn't really a game-changer for GCD. Microsoft's TPL machinery seems to get by fine without any such kernel-awareness. The same goes for Intel TBB. Perhaps it starts to matter under particularly heavy loads, I don't know.

          • I don't really understand grand central well enough to really comment on the details of architecture, sorry, you'll have to be satisfied with a less-entertaining conversation from me :(
  • Two comments

    Parallelism -- the problem with parallelism is that everyone assumes that all problems can be decomposed into problems which can be solved in parallel. This is the "if all you have is a hammer, everything looks like a nail" problem. There hasn't been a lot of progress on the P vs. NP front, nor does it look like there's likely to be one soon, short of true quantum computing. And no, D-Wave fans: quantum annealing is not the same thing as collapsing the composite wave form into the correct answe

  • Chris, thanks for the tip about Swift Playgrounds, seems neat! I hope somebody at Apple will see fit to create a Visual Basic-type development environment for Swift, or even better, something like HyperCard for iOS, maybe backed by Swift.
    • by Anonymous Coward

      Why don't you stop living in 1987 and recognize that web sites are HyperCard stacks, web pages are HyperCard cards, and JavaScript is your HyperTalk scripting language.

      Why do you think HTML has push buttons and text forms and radio buttons? Where do you think hypertext links came from? Why do you think every web browser renders a pointing finger cursor over hyperlinks? All of these things are copied from HyperCard.

      Have you ever even used HyperCard? Because I have, and the web is better in every way, and

      • Re:BASIC (Score:4, Interesting)

        by swb ( 14022 ) on Monday January 23, 2017 @08:06AM (#53719817)

        That's a bit harsh, isn't it?

        For 1987 HyperCard seemed like a pretty easy way for someone with casual knowledge to produce what amounted to something close to a GUI application without climbing the super steep learning curve involved in writing a native Mac application. I think Inside Macintosh was up to about 5 volumes by then and event-based programming was a bit of a mind fuck for people who had come out of general programming creating menu-driven designs, not to mention the headaches of generating GUI interfaces.

        I seem to remember running an NNTP reader on the Mac ~1999 that used Hypercard.

        • Hypercard looked like a winner at first. There were books about it and Hypertalk available, and there was an explosion of stacks (i.e., programs) that did useful if simple things. The first version of Myst was a Hypercard stack, and it shows. Then Apple stopped shipping the ability to write Hypertalk by default, and not that long afterwards removed Hypercard. I never understood why.

      • > Why don't you stop living in 1987 and recognize that web sites are HyperCard stacks, web pages are
        > HyperCard cards, and JavaScript is your HyperTalk scripting language.

        A key component to HC was its pervasive and invisible data storage, of which there is no analog in the basic JS/DOM world. And to compare JS to HT is a bit of a joke, both good and bad.

      • Have you ever even used HyperCard? Because I have, and the web is better in every way, and HyperTalk was buggy crap compared to JavaScript.

        Oh hell no, just getting something centered on the page is a pain on the web. And don't even talk about lining stuff up vertically. [zerobugsan...faster.net] The web is the C++ of the page-layout world: it gives us jobs.

      • HyperCard and Stacks have absolutely nothing to do with web sites etc.
        The analogy is completely flawed.

        I can not download a website, install it on my computer and modify the "stack" with a few mouseclicks to my liking.

        Where do you think hypertext links came from?
        First of all: it does not matter. And secondly: not from HyperCard, that is a myth ;D

        Because I have, and the web is better in every way,
        For the reasons given above: it is not. It is not even the same game, you can't compare them.

        and HyperTalk was b

    • There is a HyperCard-like thing for iOS called NovoCard.

      It is from http://plectrum.com/novocard/N... [plectrum.com], available for iPads, and the scripting language used is a tweaked JavaScript.

      I have made about 30 stacks with it so far; it is not as "perfect" as the original, and scripting is a bit quirky IMHO, but the look and feel is very similar.

  • CL: I can't speak to why Itanium failed (I suspect that many non-technical issues like business decisions and schedule impacted it), but VLIW is hardly dead. VLIW designs are actively used in some modern GPUs and are widely used in DSPs - one example supported by LLVM is the Qualcomm Hexagon chip. The major challenge when compiling for a VLIW architecture is that the compiler needs good profile information, so it has an accurate idea of the dynamic behavior of the program.

    VLIW is probably fine for specific purpose CPUs, be it GPUs, DSPs or any other stuff that does not require support of any legacy software. It's a horrible platform for any general purpose CPU, and it's strange that both HP (as owner of Multiflow and Cydrome) and Intel missed an important issue about it: that it can't support compatibility if software is to take advantage of any enhancement. In RISC CPUs, for instance, things like register renaming or branch prediction are done in silicon: in VLIW, the co

    • So what happened to EDGE? Lots of talk about 5 years ago, now nothing.

  • I still hate Christian Lattn-- wait a second.

  • Clang compiles code significantly faster than GCC in many cases, still generates slightly better errors and warning messages than GCC, and is usually a bit ahead in terms of support for the C++ standard.

    Which compiler is better for Plain Old C?

    • by mandolin ( 7248 )

      Which compiler is better for Plain Old C?

      It depends on what you're trying to do.
      As the author of a (crappy) chess engine: if you have a performance-sensitive app, then you just need to build it with both and see which one "wins". In my app's case, clang was a little faster, but YMMV.
      Both compilers will sometimes catch warnings that the other won't, so building with both is good practice anyway.

      In terms of warning/error clarity, clang might be slightly better, but gcc's is perfectly fine (especially if you're used to it). In the rare case that you see an obtuse

      • It depends on what you're trying to do.

        I'm currently going through an old book on compilers and interpreters in Borland C, translating Borland C into modern C and learning Pascal at the same time. With Cygwin installed, I have access to gcc and clang. Seems like the gcc error messages are too cryptic for my taste. I'll give clang a try.

  • Not to focus too much on the negative, but I get a kick out of all his hyperlinks to educational videos that require the Safari browser or a proprietary iOS app to view. In general, I can't get over how much Swift as a language, along with its ongoing development, has the whole Apple philosophy and ecosystem in mind.

    I'd also take issue with the idea that Swift (and indeed most languages these days) can fill the void left behind by the death of BASICs. Swift has no GUI layers built-in, and as Lattner point
