Slashdot's Interview With Swift Creator Chris Lattner
by Volanin
Since you're the creator of LLVM, I'd like to know, in your opinion, what's the greatest advantage of LLVM/Clang over the traditional and established GNU GCC compiler. Also, what's the greatest advantage of GNU GCC (or, if you'd prefer, any other compiler) over LLVM/Clang, something that you'd like to "port" someday?
CL: GCC and LLVM/Clang have had a mutually beneficial relationship for years. Clang's early lead in compiler error message expressivity has led the GCC developers to improve their error messages, and GCC obviously provided a very high bar to meet when LLVM and Clang were being brought up. It is important to keep in mind that GCC doesn't have to lose for Clang to win (or vice versa). Also, I haven't used GCC for many years. That said, since you asked, I'll try to answer to the best of my knowledge:
From my understanding, for a C or C++ programmer on Linux, the code generated by GCC and Clang is comparable (each wins some cases and loses others). Both compilers have a similar feature set (e.g. OpenMP is supported by both). Clang compiles code significantly faster than GCC in many cases, still generates slightly better errors and warning messages than GCC, and is usually a bit ahead in terms of support for the C++ standard. That said, the most significant benefit is improved compile time.
Going one level deeper, the most significant benefit I see of the LLVM optimizer and code generator over GCC is its architecture and design. LLVM is built with a modular library-based design, which has allowed it to be used in a variety of ways that we didn't anticipate. For example, it has been used by movie studios to JIT compile and optimize shaders used in special effects, has been used to optimize database queries, and LLVM is used as the code generator for a much wider range of source languages than GCC.
Similarly, the most significant benefit of Clang is that it is also modular. It specifically tackles problems related to IDE integration (including code completion, syntax highlighting, etc.) and has a mature and vibrant tooling ecosystem built around it. Clang's design (e.g. lack of pervasive global variables) also allows it to be used at runtime, for example in OpenCL and CUDA implementations.
The greatest advantage I can see of GCC over LLVM/Clang is that it is the default compiler on almost all Linux distributions, and of course Linux is an incredibly important platform for developers. I'd love to see more Linux distributions start shipping Clang by default. Finally, GCC provides an Ada frontend, and I'm not aware of a supported solution for Ada + LLVM.
Future of LLVM?
by mveloso
Where do you see LLVM going?
CL: There are lots of things going on, driven by the amazing community of developers (which is growing like crazy) driving forward the llvm.org projects. LLVM is now used pervasively across the industry, by companies like Apple, Intel, AMD, Nvidia, Qualcomm, Google, Sony, Facebook, ARM, Microsoft, FreeBSD, and more. I'd love for it to be used more widely on Windows and Linux.
At the same time, I expect to see LLVM continue to improve in a ton of different ways. For example, ThinLTO is an extremely promising approach to bringing scalable link time optimization to everyone, potentially even becoming the default for -O3 builds. Similarly, there is work going on to speed up compile times, add support for the Microsoft PDB debug information format, and too many other things to mention here. It is an incredibly exciting time. If you're interested in a taste of what is happening, take a look at the proceedings from the recent 2016 LLVM Developer Meeting.
Finally, the LLVM Project continues to expand. Relatively recent additions include llgo (a Go frontend) and lld (a faster linker than "Gold"), and there are rumors that a Fortran frontend may soon join the fold.
The Mythical Compiler - VLIW
by Anonymous Coward
Is there any hope for VLIW architectures? The general consensus seems to be that Itanium tanked because the compiler technology wasn't able to make the leap needed. Linus complained about the Itanium ISA exposing the pipelines to assembly developers. What are the challenges from a compiler writer's perspective with VLIW?
CL: I can't speak to why Itanium failed (I suspect that many non-technical issues like business decisions and schedule impacted it), but VLIW is hardly dead. VLIW designs are actively used in some modern GPUs and are widely used in DSPs - one example supported by LLVM is the Qualcomm Hexagon chip. The major challenge when compiling for a VLIW architecture is that the compiler needs good profile information, so it has an accurate idea of the dynamic behavior of the program.
How much of Swift is Based on Groovy?
by Anonymous Coward
So how much of Swift was inspired by Groovy? Both come from more high-end languages and look and act almost identical.
CL: It is an intentional design point of Swift that it look and feel "familiar" to folks coming from many other languages: not just Groovy. Feeling familiar and eliminating unnecessary differences from other programming languages is a way of reducing the barrier to entry for starting to program in Swift. It is also clearly true that many other languages syntactically influence each other, so you see convergence of ideas coming from many different places.
That said, I think it is a stretch to say that Swift and Groovy look and act "identical", except in some very narrow cases. The goal of Swift is simply to be as great as possible, it is not trying to imitate some other programming language.
C#
by Anonymous Coward
What do you think about Microsoft and C# versus the merits of Swift?
CL: I have a ton of respect for C#, Rust, and many other languages, and Swift certainly benefited by being able to observe their evolution over time. As such, there are a lot of similarities between these languages, and it isn't an accident.
Comparing languages is difficult in this format, because a lot of the answers come down to "it depends on what you're trying to do", but I'll give it a shot. C# has the obvious benefit of working with the .NET ecosystem, whereas Swift is stronger at working in existing C and Objective-C ecosystems like Cocoa and POSIX.
From a language level, Swift has a more general type system than C# does, offers more advanced value types, protocol extensions, etc. Swift also has advantages in mobile use cases because ARC requires significantly less memory than garbage collected languages for a given workload. On the other hand, C# has a number of cool features that Swift lacks, like async/await, LINQ, etc.
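To make the protocol-extension and value-type points above concrete, here is a minimal, hypothetical Swift sketch (the Describable protocol and Planet struct are invented for illustration): a protocol extension supplies a default method, and a struct (a value type) picks it up simply by conforming.

    protocol Describable {
        var name: String { get }
    }

    extension Describable {
        // A protocol extension: every conforming type gets this method for free.
        func describe() -> String {
            return "This is \(name)"
        }
    }

    struct Planet: Describable {   // a value type; copies are independent
        let name: String
    }

    let mars = Planet(name: "Mars")
    print(mars.describe())         // prints "This is Mars"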
Rust
by Anonymous Coward
Chris, what are your general thoughts about Rust as a programming language?
CL: I'm a huge Rust fan: I think that it is a great language and its community is amazing. Rust has a clear lead over Swift in the system programming space (e.g. for writing kernels and device drivers) and I consider it one of the very few candidates that could lead to displacing C and C++ with a more safe programming language.
That said, Swift has advantages in terms of more developers using it, a more mature IDE story, and offers a much shallower learning curve for new developers. It is also very likely that a future version of Swift will introduce move-only types and a full ownership model, which will make Swift a lot more interesting in the system programming space.
BASIC
by jo7hs2
As someone who has been involved with the development of programming languages, do you think it is still possible to come up with a modern-day replacement for BASIC that can operate in modern GUI environments?
It seems like all attempts since we went GUI (aside from maybe early VisualBASIC and Hypercard) have been too complicated, and all attempts have been platform-specific or abandoned. With the emphasis on coding in schools, it seems like it would be helpful to have a good, simple, introductory language like we had in BASIC.
CL: It's probably a huge shock, but I think Swift could be this language. If you have an iPad, you should try out the free Swift Playgrounds app, which aims to teach people about programming, assuming no prior experience. I think it would be great for Swift to expand to provide a VisualBASIC-like scripting solution for GUI apps as well.
Cross-platform
by psergiu
How cross-platform is Swift? Are the GUI libraries platform-dependent or independent? I.E.: Can I write a single Swift program with a GUI that will compile, work the same and look similar on multiple platforms: Linux, Mac OS, Real Unix-es & BSDs, AIX, Windows?
CL: Swift is Open Source, has a vibrant community with hundreds of contributors, and builds on the proven LLVM and Clang technology stack. The Swift community has ported Swift itself to many different platforms beyond Apple platforms: it runs on various Linux distros and work is underway by various people to port it to Android, Windows, various BSDs, and even IBM mainframes.
That said, Swift does not provide a GUI layer, so you need to use native technologies to do so. Swift helps by providing great support for interoperating with existing C code (and will hopefully expand to support C++ and other languages in the future). It is possible for someone to design and build a cross platform GUI layer, but I'm not aware of any serious efforts to do so.
Exception Handling
by andywest
Why did Swift NOT have exception handling in the first couple of versions?
CL: Swift 1 (released in 2014) didn't include an error handling model simply because it wasn't ready in time: it was added in Swift 2 (2015). Swift 2 included a number of great improvements that weren't ready for Swift 1, including protocol extensions. Protocol extensions dramatically expanded the design space of what you can do with Swift, bringing a new era of "protocol oriented programming". That said, even Swift 3 (2016) is still missing a ton of great things we hope to add over the coming years: there is a long and exciting road ahead! See questions below for more details.
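As a rough illustration of the error handling model that arrived in Swift 2 (written here in Swift 3 syntax; the ParseError type and parseAge function are made up for the example), errors are thrown and caught with do/try/catch:

    enum ParseError: Error {
        case notANumber(String)
    }

    func parseAge(_ text: String) throws -> Int {
        guard let age = Int(text) else {
            throw ParseError.notANumber(text)
        }
        return age
    }

    do {
        let age = try parseAge("forty-two")
        print("Parsed \(age)")
    } catch ParseError.notANumber(let value) {
        print("'\(value)' is not a number")
    } catch {
        print("Unexpected error: \(error)")
    }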
Why are strings passed by value?
by Anonymous Coward
Strings are immutable pass-by-reference objects in most modern languages. Why did you make this decision?
CL: Swift uses value semantics for all of its "built in" collections, including String, Array, Dictionary, etc. This provides a number of great advantages: it improves efficiency (permitting in-place updating instead of requiring reallocation), eliminates large classes of bugs related to unanticipated sharing (someone mutating your collection while you are using it), and defines away a class of concurrency issues. Strings are perhaps the simplest of these cases, but they get the efficiency and other benefits.
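A small sketch of what those value semantics look like in practice (nothing beyond the standard-library Array and String is used here):

    var original = [1, 2, 3]
    var copy = original        // value semantics: 'copy' is an independent value
    copy.append(4)
    print(original)            // [1, 2, 3]  -- unaffected by the mutation
    print(copy)                // [1, 2, 3, 4]

    var greeting = "Hello"
    var shout = greeting
    shout += "!!!"
    print(greeting)            // "Hello"
    print(shout)               // "Hello!!!"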
If you're interested in more detail, there is a wealth of good information about the benefits of value vs reference types online. One great place to start is the "Building Better Apps with Value Types in Swift" talk from WWDC 2015.
As a language designer
by superwiz
Since you have been involved with 2 lauded languages, you are in a good position to answer the following question: "are modern languages forced to rely on language run-time to compensate for the facilities lacking in modern operating systems?" In other words, have the languages tried to compensate for the fact that there are no new OS-level light-weight paradigms to take advantage of multi-core processors?
CL: I'm not sure exactly what you mean here, but yes: if an OS provides functionality, there is no reason for a language runtime to replicate it, so runtimes really only exist to supplement what the OS provides. That said, the line between the OS and libraries is a very blurry one: Grand Central Dispatch (GCD) is a great example, because it is a combination of user space code, kernel primitives, and more all designed together.
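For readers who haven't used GCD, here is a minimal sketch of what it looks like from Swift: work is handed to a queue in user space, and the thread pool underneath is managed together with the kernel. (The example computation is arbitrary.)

    import Dispatch

    let group = DispatchGroup()
    let queue = DispatchQueue.global(qos: .userInitiated)

    // Submit work to a concurrent queue; GCD decides how to schedule it.
    queue.async(group: group) {
        let sum = (1...1_000_000).reduce(0, +)
        print("Background sum: \(sum)")
    }

    group.wait()   // block only so this standalone example doesn't exit early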
Parallelism
by bill_mcgonigle
Say, about fifteen years ago, there was huge buzz about how languages and compilers were going to take care of the "Moore's Law Problem" by automating the parallelism of every task that could be broken up. With static single-assignment trees and the like, the programmer was going to be freed from manually doing the parallelism.
With manufacturers starting to turn out 32- and 64-core chips, I'm wondering how well we did on that front. I don't see a ton of software automatically not pegging a core on my CPUs. The ones that aren't quite as bad are mostly just doing a fork() in 2017. Did we get anywhere? Are we almost there? Is software just not compiled right now? Did it turn out to be harder than expected? Were languages not up to the task? Is hardware (e.g. memory access architectures) insufficient? Was the possibility oversold in the first place?
CL: I can't provide a global answer, but clearly parallelism is being well utilized in some "easy" cases (e.g. speeding up build time of large software, speed of 3d rendering, etc). Also, while large machines are available, most computers only have 1-4 cores (e.g. mobile phones and laptops), which means that most software doesn't have to cope with the prospect of 32-core machines… yet.
Looking forward, I am skeptical of the promises of overly magic solutions like compiler auto-parallelization of single threaded code. These "heroic" approaches can work on specific benchmarks and other narrow situations, but don't lead to a predictable and reliable programming model. For example, a good result would be for you to use one of these systems and get a 32x speedup on your codebase. A really bad result would be to then make a minor change or bug fix to your code, and find out that it caused a 32x slowdown by defeating the prior compiler optimization. Magic solutions are problematic because they don't provide programmer control.
As such, my preferred approach is for the programming language to provide expressive ways for the programmer to describe their parallelism, and then allow the compiler and runtime to efficiently optimize it. Things like actor models, OpenMP, C++ AMP, and goroutines seem like the best approach. I expect concurrency to be an active area of development in the Swift language, and hope that the first pieces of the story will land in Swift 5 (2018).
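Swift did not yet have language-level concurrency at the time of this interview, but GCD already offers the "programmer describes the parallelism, the runtime schedules it" style described above. A hedged sketch (the squaring workload is arbitrary):

    import Dispatch

    let inputs = Array(0..<8)
    var results = [Int](repeating: 0, count: inputs.count)

    // The programmer states that the iterations are independent;
    // the runtime decides how to spread them across the available cores.
    results.withUnsafeMutableBufferPointer { buffer in
        let base = buffer.baseAddress!   // non-empty array, so baseAddress exists
        DispatchQueue.concurrentPerform(iterations: inputs.count) { i in
            base[i] = inputs[i] * inputs[i]
        }
    }

    print(results)   // [0, 1, 4, 9, 16, 25, 36, 49]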
Any insight into language design choices?
by EMB Numbers
I am a 25+ year Objective-C programmer and among other topics, I teach "Mobile App Development" and "Comparative Languages" at a university. I confess to being perplexed by some Swift language design decisions. For example,
- Swift is intended to be a "Systems Programming Language", is it not? Yet, there is no support for "volatile" variables needed to support fundamental "system" features like direct memory access from peripheral hardware.
- Why not support "dynamic runtime features" like the ones provided by the Objective-C language and runtime? It's partly a trick question because Swift is remarkably "dynamic" through use of closures and other features, but why not go "all the way?"
CL: These two questions get to the root of Swift "current and future". In short, I would NOT say that Swift is an extremely compelling systems programming or scripting language today, but it does aspire to be great for this in the years ahead. Recall that Swift is only a bit over two years old at this point: Swift 1.0 was released in September 2014.
If you compare Swift to other popular contemporary programming languages (e.g. Python, Java/C#, C/C++, Javascript, Objective-C, etc), a major difference is that Swift was designed for generality: those languages were initially designed for a specific niche and use case and then organically grew outward from it.
In contrast, Swift was designed from the start to eventually span the gamut from scripting language to systems programming, and its underlying design choices anticipate the day when all the pieces come together. This is no small feat, because it requires pulling together the strengths of each of these languages into a single coherent whole, while balancing out the tradeoffs forced by each of them.
For example, if you compare Swift 3 to scripting languages like Python, Perl, and Ruby, I think that Swift is already as expressive, teachable, and easy to learn as a scripting language, and it includes a REPL and support for #! scripts. That said, there are obvious holes, like no regular expression literals, no multi-line string literals, and poor support for functionality like command line argument processing - Swift needs to be more "batteries included".
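For instance, the #! support mentioned above means a Swift source file can be run directly as a script. A minimal sketch (the file name and the path to the swift binary are assumptions and vary by platform):

    #!/usr/bin/swift
    // greet.swift - run with: ./greet.swift Alice Bob
    let names = CommandLine.arguments.dropFirst()
    if names.isEmpty {
        print("usage: greet.swift <name> ...")
    } else {
        for name in names {
            print("Hello, \(name)!")
        }
    }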
If you compare Swift 3 to systems programming languages like C/C++ or Rust, then I think there is a lot to like: Swift provides full low-level memory control through its "unsafe" APIs (e.g. you can directly call malloc and free with no overhead from Swift, if that is what you want to do). Swift also has a much lighter weight runtime than many other high level languages (e.g. no tracing Garbage Collector or threading runtime is required). That said, it has a number of holes in the story: no support for memory mapped I/O, no support for inline assembly, etc. More significantly, getting low-level control of memory requires dropping down to the unsafe APIs, which provide a very C/C++ level of control, but also the same C/C++ level lack of memory safety. I'm optimistic that the ongoing work to bring an ownership model to Swift will provide the kinds of safety and performance that Rust offers in this space.
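A hedged sketch of that "unsafe" layer in Swift 3 syntax: calling malloc and free directly from Swift and treating the raw memory as a buffer of Int32 (the values written are arbitrary):

    #if os(Linux)
    import Glibc
    #else
    import Darwin
    #endif

    let count = 4
    let byteCount = count * MemoryLayout<Int32>.stride

    // Allocate raw memory with malloc, exactly as C code would.
    guard let raw = malloc(byteCount) else { fatalError("malloc failed") }

    // View the raw bytes as Int32 values and fill them in.
    let ints = raw.bindMemory(to: Int32.self, capacity: count)
    for i in 0..<count {
        ints[i] = Int32(i * 10)
    }
    print(Array(UnsafeBufferPointer(start: ints, count: count)))  // [0, 10, 20, 30]

    free(raw)   // manual deallocation, exactly as in C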
If you compare Swift 3 to general application level languages like Java, I think it is already pretty great (and has been proven by its rapid adoption in the iOS ecosystem). The server space has very similar needs to general app development, but both would really benefit from a concurrency model (which I expect to be an important focus of Swift 5).
Beyond these areas of investment there is no shortage of ideas for other things to add over time. For example, the dynamic reflection capabilities you mention need to be built out, and many people are interested in things like pre/post-conditions, language support for typestate, further improvements to the generics system, improved submodules/namespacing, a hygienic macro system, tail calls, and so much more.
There is a long and exciting road ahead for Swift development, and one of the most important features was a key part of Swift 3: unlike in the past, we expect Swift to be extremely source compatible going forward. These new capabilities should be addable to the language and runtime without breaking code. If you are interested in following along or getting involved with Swift development, I highly encourage you to check out the swift-evolution mailing list and project on GitHub.
Any hope for more productive programming?
by Kohath
I work in the semiconductor industry and our ASIC designs have seen a few large jumps in productivity:
- Transistors and custom layouts transitioned to standard cell flows and automated P&R.
- Design using logic blocks transitioned to synthesized design using RTL with HDLs.
- Most recently, we are synthesizing circuits directly from C language.
In the same timeframe, programming has remained more or less the same as it always was. New languages offer only incremental productivity improvements, and most of the big problems from 10 or 20 years ago remain big problems.
Do you know of any initiatives that could produce a step-function increase (say 5-10x) in coding productivity for average engineers?
CL: There have been a number of attempts to make a "big leap" in programmer productivity over the years, including visual programming languages, "fourth-generation" programming languages, and others. That said, in terms of broad impact on the industry, none of these have been as successful as the widespread rise of "package" ecosystems (like Perl's CPAN, Ruby Gems, NPM in Javascript, and many others), which allow rapid reuse of other people's code. When I compare the productivity of a modern software developer using these systems, I think it is easy to see a 10x improvement in coding productivity compared to a C or C++ programmer 10 years ago.
Swift embraces this direction with its builtin package manager "SwiftPM". Just as the Swift language learns from other languages, SwiftPM is designed with an awareness of other package ecosystems and attempts to assemble their best ideas into a single coherent vision. SwiftPM also provides a portable build system, allowing development and deployment of cross-platform Swift packages. SwiftPM is still comparatively early on in design, but has some heavy hitters behind it, particularly those involved in the various Swift for the Server initiatives. You might also be interested in the online package catalog hosted by IBM.
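For a flavor of SwiftPM, here is a minimal Package.swift manifest in the Swift 3-era format (the package name and dependency URL are placeholders, and the manifest API has evolved in later Swift releases):

    import PackageDescription

    let package = Package(
        name: "HelloServer",
        dependencies: [
            // Dependencies are fetched from git and resolved by version.
            .Package(url: "https://example.com/SomeHTTPLibrary.git", majorVersion: 1)
        ]
    )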
Looking ahead, even though it is a bit cliché, I'd have to say that machine learning techniques (convolutional neural nets and deep neural nets, for example) really are changing the world by making formerly impossible things merely "hard". While it currently seems that you need a team of Ph.D.s to apply and develop these techniques, when they become better understood and developed, I expect them to be more widely accessible to the rest of us. Another really interesting recent development is the idea of "word vectors," which is a pretty cool area to geek out on.
C# vs Swift (Score:5, Interesting)
From a language level, Swift has a more general type system than C# does, offers more advanced value types, protocol extensions, etc. Swift also has advantages in mobile use cases because ARC requires significantly less memory than garbage collected languages for a given workload.
I feel like this should be quoted any time a C# programmer comes along thinking they have the perfect language (unaware of what else is out there). C# is great, but it's not the greatest possible language.
Re:C# vs Swift (Score:4, Funny)
I only need C++. I can reduce every problem to being a nail.
Re: (Score:2)
There is RemObject Elements [elementscompiler.com]. It compiles C#/Swift/Delphi and soon also Java to Net/IOS/Java/MacOS/Windows/Linux executables. The Windows/Linux compiling is new. It is really cool stuff.
Re:C# vs Swift (Score:5, Insightful)
I'm not convinced by Chris' argument here. GC is an abstract policy (objects go away after they become unreachable), ARC is a policy (GC for acyclic data structures, deterministic destruction when the object is no longer reachable) combined with a mechanism (per object refcounts, refcount manipulation on every update). There is a huge design space for mechanisms that implement the GC policy and they all trade throughput and latency in different ways. It would be entirely possible to implement the C# GC requirements using ARC combined with either a cycle detector or a full mark-and-sweep-like mechanism for collecting cycles. If you used a programming style without cyclic data structures then you'd end up with almost identical performance for both.
Most mainstream GC implementations favour throughput over latency. In ARC, you're doing (at least) an atomic op for every heap pointer assignment. In parallel code, this leads to false sharing (two threads updating references to point to the same object will contend on the reference count, even if they're only reading the object and could otherwise have it in the shared state in their caches). There is a small cost with each operation, but it's deterministic and it doesn't add much to latency (until you're in a bit of code that removes the last reference to a huge object graph and then has to pause while they're all collected - one of the key innovations of OpenStep was the autorelease pool, which meant that this kind of deallocation almost always happens in between runloop iterations). A number of other GC mechanisms are tuned for latency, to the extent that they can be used in hard realtime systems with a few constraints on data structure design (but fewer than if you're doing manual memory management).
This is, unfortunately, a common misconception regarding GC: that it implies a specific implementation choice. The first GC papers were published almost 60 years ago and it's been an active research area ever since, filling up the design space with vastly different approaches.
Garbage Collection (Score:5, Interesting)
There were experiments done in the 1970s for Lisp systems that showed that ARC was generally the slowest garbage collection algorithm, despite what C++ programmers think. You pay for every pointer move, rather than just once at GC. And the 1980s generational systems were even better.
I do not know Swift, but C compatibility is blown out of the water when a GC can actually move objects in memory, as it should.
Re: (Score:1)
For some real time needs, it is more important to have the "payment" spread all over in a deterministic way rather than appearing all at once at a random point in time.
Re: (Score:2, Informative)
This just moves the problem, because now you need to know when to call GC before your memory floods (which once again, is more likely to happen in some real time environments)
Re: (Score:1)
If you can't figure out when is an appropriate time to call GC in a realtime system, then you and your compiler are not appropriate for real time systems. If your memory fills up because your real time constraints didn't give you time to clean it up, then potentially no memory management system would work and your system has a more fundamental problem. Real time just adds more constraints, and no magic will fix an overconstrained problem.
Re: (Score:3)
If so then use a real-time GC algorithm.
Re:Garbage Collection (Score:5, Informative)
If there were experiments for LISP in the 70s, they'll have very little to say about what an optimising compiler does now. Indeed much of Swift's ARC doesn't involve any actual reference counting at run time, as static analysis has already determined when many objects can be deleted.
Re: (Score:2)
No they didn't. They had pretty much no optimisations at all. You have no idea how limited the resources were back then.
Re: (Score:3)
Who says C++ programmers think ARC is fastest? The fastest is automatic stack based memory management, what with being free and all, and that's what's probably used most of all in C++.
Re: (Score:2)
Good point, and C#/Java programmers rarely use that for objects.
As far as I know you can't in those languages. The optimizing JIT does escape analysis which attempts it where possible, but it's never going to be as effective as C++ in that regard.
Re: (Score:1)
There are some more issues to it than that.
C++ is more efficient because it can use the stack more and can store primitive types in STL collections directly.
Languages like C# and Java always have to instantiate objects on the heap, and primitive types can't be stored in collection classes.
Then, look at the amount of memory that the runtime uses: it's far, far more than any C++ exe. For instance, run an empty C# Unity project and you've blown 50 MB of RAM already.
Re: (Score:2)
You pay for every pointer move........when a GC can actually move objects in memory
That's not actually a common need
Re: (Score:2)
I can't speak for the experiments you're referring to specifically, but I've seen studies make the mistake of comparing a 1:1 ratio of std::shared_ptr objects to garbage collected references in other languages. I hate these comparisons because it overlooks the fact that garbage collected languages are so enamored by their own GC firepower that they insist on creating garbage all over the place. Outside of the base primitives, pretty much everything becomes an object that is individually tracked on the heap.
Re: (Score:2)
C++ takes a more general approach. RAII with smart pointers can and should be used to manage any resource, including memory, database connections, files, whatever. This is not normally the best approach for memory, and reference counts have problems when you get into multithreading and caching, since they force another memory reference, and possibly a memory write, somewhere other than the actual variable. Like most things in language design, it's a tradeoff, and most modern languages have chosen more s
Re: (Score:2)
The problem with GC is that it's inherently lazy.
Take straight C for example. You need to define a variable, initialize it, do whatever with it, and then free it, within the scope of the function in order to make efficient use of it. Or you use malloc.
In Perl, PHP, Javascript, and most interpreted languages, you simply define the variable, some people remember to initialize it, do whatever with it, and then let it go out of scope for it to be garbage collected. If you do this frequently enough, like inside a tight loop, then the GC introduces latency.
In C++ you can specifically tell the C++ runtime to delete objects, and use C-style variables if you want the tighter control, or stick entirely with malloc if you want to use as little memory as possible.
The goal with GC should be determined by the nature of the device. A desktop system with a lot of memory will have no problem deferring garbage collection, but then you get sites like Twitter, which endlessly "grow" the DOM and never actually GC anything until the tab is refreshed. Before Chrome finally released a 64bit version, one would only get about 2 days out of a twitter tab before it would crash. Do this on mobile and it will crash hourly, because even though the mobile device may have 1GB of memory and run in 64bit mode, it never actually "stops" running things in the background, they are just paused, and only unloaded when memory is needed. A headless device that needs to run in a wiring closet without being reset for months or years, needs to be able to detect when memory is failing to be freed otherwise the device may stop working.
I have an example of this with a Startech IP-KVM which runs linux, but because Startech doesn't release updates for the things they put their brand on after the warranty expires, this IP-KVM remains in a useless state (due to it running a version of VNC and the SSL part only working over Java) and needs to be power cycled by the remote-PDU before it can be used. The device just runs out of memory from DoS-like activity and it overwhelms the logging processes.
And that's sloppy programming. There is such a thing as GC-friendly coding conventions. A GC is not supposed to exist for programmers to go willy-nilly, "someone is going to clean my butt for me".
Re: (Score:1)
You make some good points, but it's worth noting that:
1. While you're completely correct in theory, if all of the mainstream implementations currently work in the assumed fashion then his point is still reasonably valid.
2. ARC is designed to minimise the refcount editing / false sharing. How well this goes in practice depends on the static analysis capabilities of the compiler; it will never be perfect but with a good compiler and a decent programmer it can probably be very good. It's certainly much better
Re:C# vs Swift (Score:4, Interesting)
1. While you're completely correct in theory, if all of the mainstream implementations currently work in the assumed fashion then his point is still reasonably valid.
The mainstream implementations optimise for a particular point in the tradeoff space because they're mainstream (i.e. intended to be used in that space). GCs designed for mobile and embedded systems work differently.
2. ARC is designed to minimise the refcount editing / false sharing
I've worked on the ARC optimisations a little bit. They're very primitive and the design means that you often can't elide the count manipulations. The optimisations are really there to simplify the front end, not to make the code better. When compiling Objective-C, clang is free to emit a retain for the object that's being stored and a release for the object that's being replaced. It doesn't need to try to be efficient, because the optimisers have knowledge of data and control flow that the front end lacks and so it makes sense to emit redundant operations in the front end and delete them in the middle. Swift does this subtly differently by having its own high-level IR that tracks dataflow, so the front end can feed cleaner IR into LLVM.
The design of ARC does nothing to reduce false sharing. Until 64-bit iOS, Apple was storing the refcount in a look-aside table (GNUstep put it in the object header about 20 years before Apple). This meant that you were acquiring a lock on a C++ map structure for each refcount manipulation, which made it incredibly expensive (roughly an order of magnitude more expensive than the GNUstep implementation).
Oh, and making the same optimisations work with C++ shared_ptr would be pretty straightforward, but it's mostly not needed because the refcount manipulations are inlined and simple arithmetic optimisations elide the redundant ones.
Re: (Score:2)
The design of ARC does nothing to reduce false sharing. Until 64-bit iOS, Apple was storing the refcount in a look-aside table (GNUstep put it in the object header about 20 years before Apple). This meant that you were acquiring a lock on a C++ map structure for each refcount manipulation, which made it incredibly expensive (roughly an order of magnitude more expensive than the GNUstep implementation).
No, they didn't. There was one byte for the refcount, with 1..127 meaning "real refcount" and 128..255 meaning "(refcount - 192) plus upper bits stored elsewhere". The look-aside table was only used first if the refcount exceeded 127, and then the refcount would be 192 stored in the object, and the rest elsewhere. The next change would happen only if you increased or decreased the ref count by 64 in total. Very, very rare in practice.
Re: (Score:2)
Most of what you say is true but it is missing a huge aspect: memory usage.
GC implementations trade peak memory usage for processing efficiency. Given that computing cost is in MANY cases based on memory size (typical pricing of virtual machines is driven by the number that can be packed in), the advantage shifts back to ARC and lower memory overhead (even if total CPU overhead is higher). Many GC systems require 2x the peak RAM that the application is actually using.
Any mark and sweep process is also likely to be brutal
Kernel vs Userland (Score:2)
"are modern languages forced to rely on language run-time to compensate for the facilities lacking in modern operating systems?" In other words, have the languages tried to compensate for the fact that there are no new OS-level light-weight paradigms to take advantage of multi-core processors?
Another way of looking at this (not necessarily the right or wrong way) is to encourage as much functionality as possible in userland, instead of putting it in the kernel, for security purposes. Linux allows userland drivers for this reason (SANE scanner drivers are an example of this technique).
Re: (Score:3)
In other words, have the languages tried to compensate for the fact that there are no new OS-level light-weight paradigms to take advantage of multi-core processors?
But this isn't true. Apple's Grand Central Dispatch has kernel support in Darwin.
Re: (Score:2)
That said, the line between the OS and libraries is a very blurry one: Grand Central Dispatch (GCD) is a great example, because it is a combination of user space code, kernel primitives, and more all designed together.
Re: (Score:2)
It'll take more than me making a fool of myself to convince me to RTFA before commenting ;-P
As I understand it, the kernel support isn't really a game-changer for GCD. Microsoft's TPL machinery seems to get by fine without any such kernel-awareness. The same goes for Intel TBB. Perhaps it starts to matter under particularly heavy loads, I don't know.
Two comments (Score:2)
Two comments
Parallelism -- the problem with parallelism is that everyone assumes that all problems can be decomposed into problems which can be solved in parallel. This is the "if all you have is a hammer, everything looks like a nail" problem. There hasn't been a lot of progress on the P vs. NP front, nor does it look like there's likely to be one soon, short of true quantum computing. And no, D-Wave fans: quantum annealing is not the same thing as collapsing the composite waveform into the correct answe
BASIC (Score:2)
Re: (Score:1)
Why don't you stop living in 1987 and recognize that web sites are HyperCard stacks, web pages are HyperCard cards, and JavaScript is your HyperTalk scripting language.
Why do you think HTML has push buttons and text forms and radio buttons? Where do you think hypertext links came from? Why do you think every web browser renders a pointing finger cursor over hyperlinks? All of these things are copied from HyperCard.
Have you ever even used HyperCard? Because I have, and the web is better in every way, and
Re:BASIC (Score:4, Interesting)
That's a bit harsh, isn't it?
For 1987 HyperCard seemed like a pretty easy way for someone with casual knowledge to produce what amounted to something close to a GUI application without climbing the super steep learning curve involved in writing a native Mac application. I think Inside Macintosh was up to about 5 volumes by then, and event-based programming was a bit of a mind fuck for people who had come out of general programming creating menu-driven designs, not to mention the headaches of generating GUI interfaces.
I seem to remember running an NNTP reader on the Mac ~1999 that used Hypercard.
Re: (Score:2)
Hypercard looked like a winner at first. There were books about it and Hypertalk available, and there was an explosion of stacks (i.e., programs) that did useful if simple things. The first version of Myst was a Hypercard stack, and it shows. Then Apple stopped shipping the ability to write Hypertalk by default, and not that long afterwards removed Hypercard. I never understood why.
Re: (Score:2)
> Why don't you stop living in 1987 and recognize that web sites are HyperCard stacks, web pages are
> HyperCard cards, and JavaScript is your HyperTalk scripting language.
A key component to HC was its pervasive and invisible data storage, of which there is no analog in the basic JS/DOM world. And to compare JS to HT is a bit of a joke, both good and bad.
Re: (Score:2)
Have you ever even used HyperCard? Because I have, and the web is better in every way, and HyperTalk was buggy crap compared to JavaScript.
Oh hell no, just getting something centered on the page is a pain on the web. And don't even talk about lining stuff up vertically. [zerobugsan...faster.net] The web is the C++ of the page-layout world: it gives us jobs.
Re: (Score:2)
HyperCard and Stacks have absolutely nothing to do with web sites etc.
The analogy is completely flawed.
I can not download a website, install it on my computer and modify the "stack" with a few mouseclicks to my liking.
Where do you think hypertext links came from? ;D
First of all: it does not matter. And secondly: not from HyperCard, that is a myth
Because I have, and the web is better in every way,
For the reasons given above: it is not. It is not even the same game, you can't compare them.
and HyperTalk was b
Re: (Score:2)
There is a HyperCard like thing for iOS called NovoCard.
It is from http://plectrum.com/novocard/N... [plectrum.com], available for iPads, and the scripting language used is a tweaked JavaScript.
I have made about 30 stacks with it so far; it is not as "perfect" as the original, and scripting is a bit quirky IMHO, but the look and feel is very similar.
VLIW (Score:2)
CL: I can't speak to why Itanium failed (I suspect that many non-technical issues like business decisions and schedule impacted it), but VLIW is hardly dead. VLIW designs are actively used in some modern GPUs and are widely used in DSPs - one example supported by LLVM is the Qualcomm Hexagon chip. The major challenge when compiling for a VLIW architecture is that the compiler needs good profile information, so it has an accurate idea of the dynamic behavior of the program.
VLIW is probably fine for specific purpose CPUs, be it GPUs, DSPs or any other stuff that does not require support of any legacy software. It's a horrible platform for any general purpose CPU, and it's strange that both HP (as owner of Multiflow and Cydrome) and Intel missed an important issue about it: that it can't support compatibility if software is to take advantage of any enhancement. In RISC CPUs, for instance, things like register renaming or branch prediction are done in silicon: in VLIW, the co
Re: (Score:2)
So what happened to EDGE? Lots of talk about 5 years ago, now nothing.
Re: (Score:3)
You are missing the point about the distinctions b/w CISC, RISC and VLIW. CISC had all of the complexity in silicon, like microcode, variable opcode lengths and instruction types, all w/ the goal of minimizing memory consumption. RISC pushed some of the complexity off-chip, like depending less on assembly coding and more on higher level languages like C, and then letting the silicon service the instructions in the most optimal way, using techniques like branch prediction, register renaming and so on. VLI
Re: (Score:2)
CISC ... all w/ the goal of minimizing memory consumption
That is incorrect.
The goal of CISC is to minimize memory bandwidth, not memory consumption.
Look at the strcpy function: mainframes have CISC instructions where one instruction moves as much data as you want. That means you access memory exactly once for an opcode and then multiple times for the data.
A bit more complex example is a strcpy implementation in 68k code: just two instructions, one to move the data and one to decide and perform a branch back:
Re: (Score:2)
I kept seeing RISC as Regular Instruction Set Computer, attempting to remove special cases and dumping them on the compiler. Having programmed Z80s, I have a good idea of the opposite.
I Still Hate ... (Score:2)
I still hate Christian Lattn-- wait a second.
Best compiler for Plain Old C... (Score:2)
Clang compiles code significantly faster than GCC in many cases, still generates slightly better errors and warning messages than GCC, and is usually a bit ahead in terms of support for the C++ standard.
Which compiler is better for Plain Old C?
Re: (Score:2)
Which compiler is better for Plain Old C?
It depends on what you're trying to do.
Speaking as the author of a (crappy) chess engine: if you have a performance-sensitive app, then you just need to build it with both and see which one "wins". In my app's case, clang was a little faster, but YMMV.
Both compilers will sometimes catch warnings that the other won't, so it's a good practice anyway.
In terms of warning/error clarity, clang might be slightly better, but gcc's is perfectly fine (especially if you're used to it). In the rare case that you see an obtuse
Re: (Score:2)
It depends on what you're trying to do.
I'm currently going through an old book on compilers and interpreters in Borland C, translating Borland C into modern C and learning Pascal at the same time. With Cygwin installed, I have access to gcc and clang. Seems like the gcc error messages are too cryptic for my taste. I'll give clang a try.
Learning languages (Score:2)
Not to focus too much on the negative, but I get a kick out of all his hyperlinks to educational videos that require the Safari browser or a proprietary iOS app to view. In general, I can't get over how much Swift as a language, along with its ongoing development, has the whole Apple philosophy and ecosystem in mind.
I'd also take issue with the idea that Swift (and indeed most languages these days) can fill the void left behind by the death of BASICs. Swift has no GUI layers built-in, and as Lattner point