Programming

Are Software Designers Ignoring The Needs of the Elderly? (vortex.com) 205

"[A]t the very time that it's become increasingly difficult for anyone to conduct their day to day lives without using the Net, some categories of people are increasingly being treated badly by many software designers," argues long-time Slashdot reader Lauren Weinstein:
The victims of these attitudes include various special needs groups — visually and/or motor impaired are just two examples — but the elderly are a particular target. Working routinely with extremely elderly persons who are very active Internet users (including in their upper 90s!), I'm particularly sensitive to the difficulties that they face keeping their Net lifelines going. Often they're working on very old computers, without the resources (financial or human) to permit them to upgrade. They may still be running very old, admittedly risky OS versions and old browsers — Windows 7 is going to be used by many for years to come, despite hitting its official "end of life" for updates a few days ago.

Yet these elderly users are increasingly dependent on the Net to pay bills (more and more firms are making alternatives increasingly difficult and in some cases expensive), to stay in touch with friends and loved ones, and for many of the other routine purposes for which all of us now routinely depend on these technologies....

There's an aspect of this that is even worse. It's attitudes! It's the attitudes of many software designers that suggest they apparently really don't care about this class of users much — or at all. They design interfaces that are difficult for these users to navigate. Or in extreme cases, they simply drop support for many of these users entirely, by eliminating functionality that permits their old systems and old browsers to function.

He cites the example of Discourse, the open source internet forum software, which recently announced they'd stop supporting Internet Explorer. Weinstein himself hates Microsoft's browser, "Yet what of the users who don't understand how to upgrade? Who don't have anyone to help them upgrade? Are we to tell them that they matter not at all?"

So he confronted Stack Overflow co-founder Jeff Atwood (who is also one of the co-founders of Discourse) on Twitter — and eventually found himself blocked.

"Far more important though than this particular case is the attitude being expressed by so many in the software community, an attitude that suggests that many highly capable software engineers don't really appreciate these users and the kinds of problems that many of these users may have, that can prevent them from making even relatively simple changes or upgrades to their systems — which they need to keep using as much as anyone — in the real world."
Programming

Introducing JetBrains Mono, 'A Typeface for Developers' (jetbrains.com) 73

Long-time Slashdot reader destinyland writes:
JetBrains (which makes IDEs and other tools for developers and project managers) just open sourced a new "typeface for developers."

JetBrains Mono offers taller lowercase letters while keeping all letters "simple and free from unnecessary details... The easier the forms, the faster the eye perceives them and the less effort the brain needs to process them." There's a dot inside zeroes (but not in O's), and distinguishing marks have also been added to the lowercase L (to distinguish it from both 1's and a capital I). Even the shape of the comma has been made more angular so it's easier to distinguish from a period.

"The shape of ovals approaches that of rectangular symbols. This makes the whole pattern of the text more clear-cut," explains the font's web site. "The outer sides of ovals ensure there are no additional obstacles for your eyes as they scan the text vertically."

And one optional feature even lets you merge multi-character sequences like -> and ++ into single ligature glyphs. (138 code-specific ligatures are included with the font.)

Open Source

What Linus Torvalds Gets Wrong About ZFS (arstechnica.com) 279

Ars Technica recently ran a rebuttal by author, podcaster, coder, and "mercenary sysadmin" Jim Salter to some comments Linus Torvalds made last week about ZFS.

While it's reasonable for Torvalds to oppose integrating the CDDL-licensed ZFS into the kernel, Salter argues, he believes Torvalds' characterization of the filesystem was "inaccurate and damaging."
Torvalds dips into his own impressions of ZFS itself, both as a project and a filesystem. This is where things go badly off the rails, as Torvalds states, "Don't use ZFS. It's that simple. It was always more of a buzzword than anything else, I feel... [the] benchmarks I've seen do not make ZFS look all that great. And as far as I can tell, it has no real maintenance behind it any more..."

This jaw-dropping statement makes me wonder whether Torvalds has ever actually used or seriously investigated ZFS. Keep in mind, he's not merely making this statement about ZFS now, he's making it about ZFS for the last 15 years -- and is relegating everything from atomic snapshots to rapid replication to on-disk compression to per-block checksumming to automatic data repair and more to the status of "just buzzwords."

[The 2,300-word article goes on to describe ZFS features like per-block checksumming, automatic data repair, rapid replication and atomic snapshots -- as well as "performance wins" including its Adaptive Replacement Cache (ARC) algorithm and its inline compression (which allows datasets to be live-compressed with a choice of algorithms).]

The TL;DR here is that it's not really accurate to make blanket statements about ZFS performance, absent a very particular, well-understood workload to measure that performance on. But more importantly, quibbling about the fastest possible benchmark rather loses the main point of ZFS. This filesystem is meant to provide an eminently scalable filesystem that's extremely resistant to data loss; those are points Torvalds notably never so much as touches on....
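For readers unfamiliar with the self-healing behavior Salter describes, the idea -- per-block checksums stored alongside redundant copies -- can be sketched in a few lines of Python. This is a toy illustration of the concept only, not ZFS's actual implementation (ZFS uses fletcher4 or sha256 checksums kept in block pointers; the class and names here are invented):

```python
import hashlib

def checksum(block: bytes) -> str:
    # ZFS checksums every block it writes; sha256 stands in here
    return hashlib.sha256(block).hexdigest()

class MirroredStore:
    """Toy two-way mirror: each block is written to both sides, and its
    checksum is kept separately (loosely like a ZFS block pointer)."""
    def __init__(self):
        self.side_a, self.side_b, self.sums = {}, {}, {}

    def write(self, key: str, block: bytes) -> None:
        self.side_a[key] = block
        self.side_b[key] = block
        self.sums[key] = checksum(block)

    def read(self, key: str) -> bytes:
        # Try each side; on a checksum mismatch fall back to the mirror
        # and overwrite the other side with the verified copy.
        for primary, mirror in ((self.side_a, self.side_b),
                                (self.side_b, self.side_a)):
            block = primary[key]
            if checksum(block) == self.sums[key]:
                mirror[key] = block  # "self-healing" repair
                return block
        raise IOError("both copies corrupt: unrecoverable")
```

Corrupt one side and a read still returns the good data while silently repairing the bad copy -- the behavior that distinguishes this design from filesystems that trust the disk.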

Meanwhile, OpenZFS is actively consumed, developed, and in some cases commercially supported by organizations ranging from the Lawrence Livermore National Laboratory (where OpenZFS is the underpinning of some of the world's largest supercomputers) through Datto, Delphix, Joyent, ixSystems, Proxmox, Canonical, and more...

It's possible to not have a personal need for ZFS. But to write it off as "more of a buzzword than anything else" seems to expose massive ignorance on the subject... Torvalds' status within the Linux community grants his words an impact that can be entirely out of proportion to Torvalds' own knowledge of a given topic -- and this was clearly one of those topics.

Google

Red Hat and IBM Jointly File Another Amicus Brief In Google v. Oracle, Arguing APIs Are Not Copyrightable (redhat.com) 42

Monday Red Hat and IBM jointly filed their own amicus brief with the U.S. Supreme Court in the "Google vs. Oracle" case, arguing that APIs cannot be copyrighted.

"That simple, yet powerful principle has been a cornerstone of technological and economic growth for over sixty years. When published (as has been common industry practice for over three decades) or lawfully reverse engineered, they have spurred innovation through competition, increased productivity and economic efficiency, and connected the world in a way that has benefited commercial enterprises and consumers alike."

An anonymous reader quotes Red Hat's announcement of the brief: "The Federal Circuit's unduly narrow construction of 17 U.S.C. 102(b) is harmful to progress, competition, and innovation in the field of software development," Red Hat stated in the brief. "IBM and Red Hat urge the Court to reverse the decision below on the basis that 17 U.S.C. 102(b) excludes software interfaces from copyright protection...."

The lower court incorrectly extended copyright protection to software interfaces. If left uncorrected, the lower court rulings could harm software compatibility and interoperability and have a chilling effect on the innovation represented by the open source community... Red Hat's significant involvement with Java development over the last 20 years has included extensive contributions to OpenJDK, an open source implementation of the Java platform, and the development of Red Hat Middleware, a suite of Java-based middleware solutions to build, integrate, automate and deploy enterprise applications. As an open source leader, Red Hat has a stake in the consistent and correct determination of the scope of copyright protection that applies to interfaces of computer programs, including the Java platform interface at stake in this case.

Open source software development relies on the availability of and unencumbered access to software interfaces, including products that are compatible with or interoperate with other computer products, platforms, and services...

Stats

Slate Announces List of The 30 Most Evil Tech Companies (slate.com) 163

An anonymous reader quotes Slate:
Separating out the meaningful threats from the noise is hard. Is Facebook really the danger to democracy it looks like? Is Uber really worse than the system it replaced? Isn't Amazon's same-day delivery worth it? Which harms are real and which are hypothetical? Has the techlash gotten it right? And which of these companies is really the worst? Which ones might be, well, evil?

We don't mean evil in the mustache-twirling, burn-the-world-from-a-secret-lair sense -- well, we mostly don't mean that -- but rather in the way Googlers once swore to avoid mission drift, respect their users, and spurn short-term profiteering, even though the company now regularly faces scandals in which it has violated its users' or workers' trust. We mean ills that outweigh conveniences. We mean temptations and poison pills and unanticipated outcomes.

Slate sent ballots to "a wide range of journalists, scholars, advocates, and others who have been thinking critically about technology for years," and reported that while America's big tech companies topped the list, "our respondents are deeply concerned about foreign companies dabbling in surveillance and A.I., as well as the domestic gunners that power the data-broker business."

But while there were some disagreements, Palantir still rose to #4 on the list because "almost everyone distrusts Peter Thiel."

Interestingly, their list ranks SpaceX at #17 (for potentially disrupting astronomy by clogging the sky with satellites) and ranks Tesla at #14 for "its troubled record of worker safety and its dubious claims that it will soon offer 'full self-driving' to customers who have already paid $7,000 for the promised add-on... Our respondents say the very real social good that Tesla has done by creating safe, zero-emission vehicles does not justify misdeeds, like apparent 'stealth recalls' of defects that appear to violate safety laws or the 19 unresolved Clean Air Act violations at its paint shop."

Slate's article includes its comprehensive list of the 30 most dangerous tech companies. But here's the top 10:
  1. Amazon
  2. Facebook
  3. Alphabet
  4. Palantir Technologies
  5. Uber
  6. Apple
  7. Microsoft
  8. Twitter
  9. ByteDance
  10. Exxon Mobil

There are also lots of familiar names further down the list, including both 8chan (#20) and Cloudflare (#21). 23andMe came in at #18, while Huawei was #11. Netflix does not appear anywhere on the list, but Disney ranks #15.

And Oracle was #19. "It takes a lot to make me feel like Google is being victimized by a bully," wrote Cory Doctorow, "but Oracle managed it."


Math

Major Breakthrough In Quantum Computing Shows That MIP* = RE (arxiv.org) 28

Slashdot reader JoshuaZ writes:
In a major breakthrough in quantum computing it was shown that MIP* equals RE. MIP* is the set of problems that can be efficiently verified by a classical computer interacting with multiple quantum provers sharing any amount of entanglement. RE is the set of recursively enumerable problems; this includes every problem whose "yes" answers can be confirmed by some computation, up to and including the halting problem.

This result comes from years of deep development in the understanding of interactive protocols, where one entity, a verifier, has much less computing power than another set of entities, provers, who wish to convince the verifier of the truth of a claim. In 1990, a major result was that a classical computer with a polynomial amount of time could be convinced of any claim in PSPACE by interacting with an arbitrarily powerful classical computer. Here PSPACE is the set of problems solvable by a classical computer with a polynomial amount of space. Subsequent results showed that if one allowed the verifier to interact with multiple provers, the verifier could be convinced of a solution to any problem in NEXPTIME, a class conjectured to be much larger than PSPACE. For a while, it was believed that in the quantum case the set of problems might actually be smaller, since multiple quantum computers might be able to use their shared entangled qubits to "cheat" the verifier. However, not only has this turned out not to be the case, but the exact opposite holds: MIP* is not merely large, it is about as large as such a class can naturally be.
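The flavor of such protocols -- a weak verifier spot-checking a powerful prover with randomness -- can be illustrated with Freivalds' algorithm, a classical toy (not the MIP* protocol itself): a verifier checks a prover's claimed matrix product in O(n²) work per round rather than recomputing it in O(n³), and a false claim survives each round with probability at most 1/2.

```python
import random

def freivalds_check(a, b, c, rounds=20):
    """Probabilistically verify the prover's claim that a @ b == c.
    Each round multiplies by a random 0/1 vector: O(n^2) work,
    versus O(n^3) to recompute the product outright."""
    n = len(a)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compute a @ (b @ r) and c @ r, then compare.
        br = [sum(b[i][j] * r[j] for j in range(n)) for i in range(n)]
        abr = [sum(a[i][j] * br[j] for j in range(n)) for i in range(n)]
        cr = [sum(c[i][j] * r[j] for j in range(n)) for i in range(n)]
        if abr != cr:
            return False  # caught the prover lying
    return True  # accept: a false claim survives 20 rounds w.p. <= 2^-20
```

The verifier never has the power to compute the product itself, yet randomness lets it trust the answer with overwhelming confidence -- the same asymmetry that drives the MIP* results.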

This result while a very big deal from a theoretical standpoint is unlikely to have any immediate applications since it supposes quantum computers with arbitrarily large amounts of computational power and infinite amounts of entanglement.

The paper in question is a 165-page tour de force which, incidentally, also shows that the Connes embedding conjecture, a 50-year-old major conjecture from the theory of operator algebras, is false.

Security

Researchers Find Serious Flaws In WordPress Plugins Used On 400K Sites (arstechnica.com) 11

An anonymous reader quotes a report from Ars Technica: Serious vulnerabilities have recently come to light in three WordPress plugins that have been installed on a combined 400,000 websites, researchers said. InfiniteWP, WP Time Capsule, and WP Database Reset are all affected. The highest-impact flaw is an authentication bypass vulnerability in the InfiniteWP Client, a plugin installed on more than 300,000 websites. It allows administrators to manage multiple websites from a single server. The flaw lets anyone log in to an administrative account with no credentials at all. From there, attackers can delete contents, add new accounts, and carry out a wide range of other malicious tasks.

The critical flaw in WP Time Capsule also leads to an authentication bypass that allows unauthenticated attackers to log in as an administrator. WP Time Capsule, which runs on about 20,000 sites, is designed to make backing up website data easier. By including a string in a POST request, attackers can obtain a list of all administrative accounts and automatically log in to the first one. The bug has been fixed in version 1.21.16. Sites running earlier versions should update right away. Web security firm WebARX has more details.

The last vulnerable plugin is WP Database Reset, which is installed on about 80,000 sites. One flaw allows any unauthenticated person to reset any table in the database to its original WordPress state. The bug is caused by reset functions that aren't secured by the standard capability checks or security nonces. Exploits can result in the complete loss of data or a site reset to the default WordPress settings. A second security flaw in WP Database Reset causes a privilege-escalation vulnerability that allows any authenticated user -- even those with minimal system rights -- to gain administrative rights and lock out all other users. All site administrators using this plugin should update to version 3.15, which patches both vulnerabilities. Wordfence has more details about both flaws here.
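The underlying pattern -- privileged endpoints that skip capability checks and security nonces -- is a generic web-application pitfall. Here is a schematic Python sketch of the two checks the vulnerable reset functions omitted (the names, data structures, and handler are invented for illustration; this is not the plugins' actual code):

```python
import hmac
import hashlib
import secrets

SECRET = secrets.token_bytes(32)  # per-site secret, akin to WordPress salts

def make_nonce(user_id: str, action: str) -> str:
    # Tie the token to a specific user *and* a specific action
    msg = f"{user_id}:{action}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def reset_table(user: dict, table: str, nonce: str) -> str:
    """Hardened handler: both checks the vulnerable code skipped."""
    # 1. Capability check: only admins may reset tables
    if "manage_options" not in user["capabilities"]:
        raise PermissionError("insufficient privileges")
    # 2. Nonce check: the request must carry a valid action token
    expected = make_nonce(user["id"], f"reset:{table}")
    if not hmac.compare_digest(nonce, expected):
        raise PermissionError("bad or missing nonce")
    return f"table {table} reset"
```

Drop either check and any authenticated user -- or, if authentication is also broken, anyone at all -- can wipe the table, which is essentially what the researchers found.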

Oracle

Oracle Ties Previous All-Time Patch High With January 2020 Updates (threatpost.com) 9

"Not sure if this is good news (Oracle is very busy patching their stuff) or bad news (Oracle is very busy patching their stuff) but this quarterly cycle they tied their all-time high number of vulnerability fixes released," writes Slashdot reader bobthesungeek76036. "And they are urging folks to not drag their feet in deploying these patches." Threatpost reports: The software giant patched 300+ bugs in its quarterly update. Oracle has patched 334 vulnerabilities across all of its product families in its January 2020 quarterly Critical Patch Update (CPU). Out of these, 43 are critical/severe flaws carrying CVSS scores of 9.1 and above. The CPU ties for Oracle's previous all-time high for number of patches issued, in July 2019, which overtook its previous record of 308 in July 2017. The company said in a pre-release announcement that some of the vulnerabilities affect multiple products. "Due to the threat posed by a successful attack, Oracle strongly recommends that customers apply Critical Patch Update patches as soon as possible," it added.

"Some of these vulnerabilities were remotely exploitable, not requiring any login data; therefore posing an extremely high risk of exposure," said Boris Cipot, senior security engineer at Synopsys, speaking to Threatpost. "Additionally, there were database, system-level, Java and virtualization patches within the scope of this update. These are all critical elements within a company's infrastructure, and for this reason the update should be considered mandatory. At the same time, organizations need to take into account the impact that this update could have on their systems, scheduling downtime accordingly."

Cellphones

PinePhone Linux Smartphone Shipment Finally Begins (fossbytes.com) 52

Pine64 will finally start shipping pre-order units of the PinePhone Braveheart Edition on January 17, 2020. Fossbytes reports: A year ago, the PinePhone was available only to developers and hackers. Encouraged by the response and suggestions it received, the Pine64 developers decided to offer the PinePhone to everyone, and pre-orders for the Braveheart Edition opened in November last year. But manufacturing issues pushed the shipment, originally scheduled for December, back by several weeks.

The PinePhone Braveheart Edition is an affordable, open source Linux smartphone that ships with a factory test image based on postmarketOS on its internal storage. The PinePhone Wiki lists compatible operating systems, such as Ubuntu Touch, postmarketOS, and Sailfish OS, which can be booted either from internal storage or from an SD card.

Electronic Frontier Foundation

EFF Files Amicus Brief In Google v. Oracle, Arguing APIs Are Not Copyrightable (eff.org) 147

Areyoukiddingme writes: EFF has filed an amicus brief with the U.S. Supreme Court in Google v. Oracle, arguing that APIs are not copyrightable. From the press release: "The Electronic Frontier Foundation (EFF) today asked the U.S. Supreme Court to rule that functional aspects of Oracle's Java programming language are not copyrightable, and even if they were, employing them to create new computer code falls under fair use protections. The court is reviewing a long-running lawsuit Oracle filed against Google, which claimed that Google's use of certain Java application programming interfaces (APIs) in its Android operating system violated Oracle's copyrights. The case has far-reaching implications for innovation in software development, competition, and interoperability.

In a brief filed today, EFF argues that the Federal Circuit, in ruling APIs were copyrightable, ignored clear and specific language in the copyright statute that excludes copyright protection for procedures, processes, and methods of operation. 'Instead of following the law, the Federal Circuit decided to rewrite it to eliminate almost all the exclusions from copyright protection that Congress put in the statute,' said EFF Legal Director Corynne McSherry. 'APIs are not copyrightable. The Federal Circuit's ruling has created a dangerous precedent that will encourage more lawsuits and make innovative software development prohibitively expensive. Fortunately, the Supreme Court can and should fix this mess.'" Oral arguments before the U.S. Supreme Court are scheduled for March 2020, and a decision by June.

Programming

'We're Approaching the Limits of Computer Power -- We Need New Programmers Now' (theguardian.com) 306

Ever-faster processors led to bloated software, but physical limits may force a return to the concise code of the past. John Naughton: Moore's law is just a statement of an empirical correlation observed over a particular period in history and we are reaching the limits of its application. In 2010, Moore himself predicted that the laws of physics would call a halt to the exponential increases. "In terms of size of transistor," he said, "you can see that we're approaching the size of atoms, which is a fundamental barrier, but it'll be two or three generations before we get that far -- but that's as far out as we've ever been able to see. We have another 10 to 20 years before we reach a fundamental limit." We've now reached 2020 and so the certainty that we will always have sufficiently powerful computing hardware for our expanding needs is beginning to look complacent. Since this has been obvious for decades to those in the business, there's been lots of research into ingenious ways of packing more computing power into machines, for example using multi-core architectures in which a CPU has two or more separate processing units called "cores" -- in the hope of postponing the awful day when the silicon chip finally runs out of road. (The new Apple Mac Pro, for example, is powered by a 28-core Intel Xeon processor.) And of course there is also a good deal of frenzied research into quantum computing, which could, in principle, be an epochal development.

But computing involves a combination of hardware and software and one of the predictable consequences of Moore's law is that it made programmers lazier. Writing software is a craft and some people are better at it than others. They write code that is more elegant and, more importantly, leaner, so that it executes faster. In the early days, when the hardware was relatively primitive, craftsmanship really mattered. When Bill Gates was a lad, for example, he wrote a Basic interpreter for one of the earliest microcomputers, the TRS-80. Because the machine had only a tiny read-only memory, Gates had to fit it into just 16 kilobytes. He wrote it in assembly language to increase efficiency and save space; there's a legend that for years afterwards he could recite the entire program by heart. There are thousands of stories like this from the early days of computing. But as Moore's law took hold, the need to write lean, parsimonious code gradually disappeared and incentives changed.

Programming

How Is Computer Programming Different Today Than 20 Years Ago? (medium.com) 325

This week a former engineer for the Microsoft Windows Core OS Division shared an insightful (and very entertaining) list with "some changes I have noticed over the last 20 years" in the computer programming world. Some excerpts: - Some programming concepts that were mostly theoretical 20 years ago have since made it to the mainstream, including many functional programming paradigms like immutability, tail recursion, lazily evaluated collections, pattern matching, first-class functions, and looking down upon anyone who doesn't use them...

- 3 billion devices run Java. That number hasn't changed in the last 10 years though...

- A package management ecosystem is essential for programming languages now. People simply don't want to go through the hassle of finding, downloading and installing libraries anymore. 20 years ago we used to visit web sites, download zip files, copy them to the correct locations, add them to the paths in the build configuration and pray that they worked.

- Being a software development team now involves all team members performing a mysterious ritual of standing up together for 15 minutes in the morning and drawing occult symbols with post-its....

- Since we have much faster CPUs now, numerical calculations are done in Python which is much slower than Fortran. So numerical calculations basically take the same amount of time as they did 20 years ago...

- Even programming languages took a side on the debate on Tabs vs Spaces....

- Code must run behind at least three levels of virtualization now. Code that runs on bare metal is unnecessarily performant....

- A tutorial isn't really helpful if it's not a video recording that takes orders of magnitude longer to understand than its text.

- There is StackOverflow which simply didn't exist back then. Asking a programming question involved talking to your colleagues.

- People develop software on Macs.

In our new world where internet connectivity is the norm and being offline the exception, "Security is something we have to think about now... Because of side-channel attacks we can't even trust the physical processor anymore."

And of course, "We don't use IRC for communication anymore. We prefer a bloated version called Slack because we just didn't want to type in a server address...."
Education

Are We Teaching Engineers the Wrong Way to Think? (zdnet.com) 125

Tech columnist Chris Matyszczyk summarizes the argument of four researchers who are warning about the perils of pure engineer thought: They write, politely: "Engineers enter the workforce with important analysis skills, but may struggle to 'think outside the box' when it comes to creative problem-solving." The academics blame the way engineers are educated.

They explain there are two sorts of thinking -- convergent and divergent. The former is the one with which engineers are most familiar. You make a list of steps to be taken to solve a problem and you take those steps. You expect a definite answer. Divergent thinking, however, requires many different ways of thinking about a problem and leads to many potential solutions. These academics declare emphatically: "Divergent thinking skills are largely ignored in engineering courses, which tend to focus on a linear progression of narrow, discipline-focused technical information."

Ah, that explains a lot, doesn't it? Indeed, these researchers insist that engineering students "become experts at working individually and applying a series of formulas and rules to structured problems with a 'right' answer."

Oddly, I know several people at Google just like that.

Fortunately, the researchers are also proposing this solution:

"While engineers need skills in analysis and judgment, they also need to cultivate an open, curious, and kind attitude, so they don't fixate on one particular approach and are able to consider new data."
Databases

'Top Programming Skills' List Shows Employers Want SQL (dice.com) 108

Former Slashdot contributor Nick Kolakowski is now a senior editor at Dice Insights, where he's just published a list of the top programming skills employers were looking for during the last 30 days.
If you're a software developer on the hunt for a new gig (or you're merely curious about what programming skills employers are looking for these days), one thing is clear: employers really, really, really want technologists who know how to build, maintain, and scale everything database- (and data-) related.

We've come to that conclusion after analyzing data about programming skills from Burning Glass, which collects and organizes millions of job postings from across the country.

The biggest takeaway? "When it comes to programming skills, employers are hungriest for SQL." Here's their ranking of the top most in-demand skills:
  1. SQL
  2. Java
  3. "Software development"
  4. "Software engineering"
  5. Python
  6. JavaScript
  7. Linux
  8. Oracle
  9. C#
  10. Git

The list actually includes the top 18 programming skills, but besides languages like C++ and .NET, it also includes more generalized skills like "Agile development," "debugging," and "Unix."

But Nick concludes that "As a developer, if you've mastered database and data-analytics skills, that makes you insanely valuable to a whole range of companies out there."
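As a minimal illustration of the kind of SQL fluency such listings test for, here is a self-contained example using Python's built-in sqlite3 module (the table and data are invented for the sketch):

```python
import sqlite3

# In-memory database: no server or file needed
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [("acme", 120.0), ("acme", 80.0), ("globex", 50.0)],
)

# Aggregate revenue per customer: bread-and-butter analytics SQL
rows = conn.execute(
    "SELECT customer, SUM(total) FROM orders "
    "GROUP BY customer ORDER BY customer"
).fetchall()
print(rows)  # [('acme', 200.0), ('globex', 50.0)]
```

The GROUP BY / aggregate pattern here, scaled up to production schemas, is exactly the "build, maintain, and scale everything database-related" skill the rankings point at.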


Bug

This Year's Y2K20 Bug Came Directly From 'A Lazy Fix' to the Y2K Bug (newscientist.com) 160

Slashdot reader The8re still remembers the Y2K bug. Now he shares a New Scientist article explaining how it led directly to this year's Y2020 bug -- which affected more than just parking meters: WWE 2K20, a professional wrestling video game, also stopped working at midnight on 1 January 2020. Within 24 hours, the game's developers, 2K, issued a downloadable fix. Another piece of software, Splunk, which ironically looks for errors in computer systems, was found to be vulnerable to the Y2020 bug in November. The company rolled out a fix the same week to its users -- which include 92 of the Fortune 100, the top 100 companies in the US....

The Y2020 bug, which has taken many payment and computer systems offline, is a long-lingering side effect of attempts to fix the Y2K, or millennium, bug. Both stem from the way computers store dates. Many older systems express years using two digits -- 98, for instance, for 1998 -- in an effort to save memory. The Y2K bug was a fear that computers would treat 00 as 1900, rather than 2000. Programmers wanting to avoid the Y2K bug had two broad options: entirely rewrite their code, or adopt a quick fix called "windowing", which would treat all dates from 00 to 20 as being in the 2000s rather than the 1900s. An estimated 80 percent of computers fixed in 1999 used the quicker, cheaper option. "Windowing, even during Y2K, was the worst of all possible solutions because it kicked the problem down the road," says Dylan Mulvin at the London School of Economics....
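The windowing trick described above fits in one function, along with the way it fails in 2020. (The pivot of 20 follows the article; real systems picked various pivots, and the helper name here is invented.)

```python
def expand_year(yy: int, pivot: int = 20) -> int:
    """'Windowing': map a two-digit year to a full year.
    Values below the pivot are read as 20xx, the rest as 19xx."""
    return 2000 + yy if yy < pivot else 1900 + yy

print(expand_year(98))  # 1998 -- what the fix intended
print(expand_year(5))   # 2005 -- also fine
print(expand_year(20))  # 1920 -- the Y2020 bug: this year falls outside the window
```

Every windowed system has a year like this baked in; it just arrives whenever the chosen pivot does.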

Another date storage problem faces us in the year 2038. That issue stems from Unix epoch time: dates are stored as a signed 32-bit count of seconds since 1 January 1970, which will run out of capacity at 3:14 am on 19 January 2038.
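The 2038 limit falls out of simple arithmetic -- a signed 32-bit counter of seconds since 1 January 1970 tops out at 2³¹ − 1 -- and can be checked in a couple of lines:

```python
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1  # largest value a signed 32-bit integer can hold

# The last second representable as a 32-bit Unix timestamp
last_moment = datetime.fromtimestamp(INT32_MAX, tz=timezone.utc)
print(last_moment)  # 2038-01-19 03:14:07+00:00
```

One second later the counter overflows, which on naive systems wraps the clock back into 1901.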

Stats

2019's Fastest Growing Programming Language Was C, Says TIOBE (tiobe.com) 106

Which programming language saw the biggest jump on TIOBE's index of language popularity over the last year?

Unlike last year -- it's not Python. An anonymous reader quotes TIOBE.com: It is good old language C that wins the award this time with a yearly increase of 2.4%... The major drivers behind this trend are the Internet of Things (IoT) and the vast number of small intelligent devices that are released nowadays...

Runners up are C# (+2.1%), Python (+1.4%) and Swift (+0.6%)...

Other interesting winners of 2019 are Swift (from #15 to #9) and Ruby (from #18 to #11). Swift is a permanent top 10 player now and Ruby seems [destined] to become one soon.

Some languages that were supposed to break through in 2019 didn't: Rust gained only 3 positions (from #33 to #30), Kotlin lost 3 positions (from #31 to #35), Julia lost as many as 10 positions (from #37 to #47) and TypeScript gained just one position (from #49 to #48).

And here's the new top 10 programming languages right now, according to TIOBE's January 2020 index.
  • Java
  • C
  • Python
  • C++
  • C# (up two positions from January 2019)
  • Visual Basic .NET (down one position from January 2019)
  • JavaScript (down one position from January 2019)
  • PHP
  • Swift (up six positions from January 2019)
  • SQL (down one position from January 2019)

Open Source

Linus Torvalds: Avoid Oracle's ZFS Kernel Code Until 'Litigious' Larry Signs Off (zdnet.com) 247

"Linux kernel head Linus Torvalds has warned engineers against adding a module for the ZFS filesystem that was designed by Sun Microsystems -- and now owned by Oracle -- due to licensing issues," reports ZDNet: As reported by Phoronix, Torvalds has warned kernel developers against using ZFS on Linux, an implementation of OpenZFS, and refuses to merge any ZFS code until Oracle changes the open-source license it uses.

ZFS has long been licensed under Sun's Common Development and Distribution License, as opposed to the Linux kernel, which is licensed under the GNU General Public License (GPL). Torvalds aired his opinion on the matter in response to a developer who argued that a recent kernel change "broke an important third-party module: ZFS". The Linux kernel creator says he refuses to merge the ZFS module into the kernel because he can't risk a lawsuit from "litigious" Oracle -- which is still trying to sue Google for copyright violations over its use of Java APIs in Android -- and won't do so until Oracle founder Larry Ellison signs off on its use in the Linux kernel.

"If somebody adds a kernel module like ZFS, they are on their own. I can't maintain it and I cannot be bound by other people's kernel changes," explained Torvalds. "And honestly, there is no way I can merge any of the ZFS efforts until I get an official letter from Oracle that is signed by their main legal counsel or preferably by Larry Ellison himself that says that yes, it's OK to do so and treat the end result as GPL'd," Torvalds continued.

"Other people think it can be OK to merge ZFS code into the kernel and that the module interface makes it OK, and that's their decision. But considering Oracle's litigious nature, and the questions over licensing, there's no way I can feel safe in ever doing so."

Intel

Intel's First Discrete GPU is Built For Developers (engadget.com) 50

At its CES 2020 keynote, Intel showed off its upcoming Xe discrete graphics chip and today, we're seeing exactly how that's going to be implemented. From a report: First off, Intel unveiled a standalone DG1 "software development vehicle" card that will allow developers to optimize apps for the new graphics system. It didn't reveal any performance details for the card, but did show it running the Warframe game. It also noted that it's now "sampling to ISVs (independent software vendors) worldwide... enabling developers to optimize for Xe." As far as we know right now, Intel's discrete graphics will be chips (not cards) installed together with the CPUs on a single package. However, it's interesting to see Intel graphics in the form of a standalone PCIe card, even one that will never be sold to consumers.

Google

Chrome OS Has Stalled Out 112

Speaking of Chromebooks, David Ruddock opines at AndroidPolice: Chrome OS' problems really became apparent to me when Android app compatibility was introduced, around five years ago. Getting Android apps to run on Chrome OS was simultaneously one of the Chrome team's greatest achievements and one of its worst mistakes. In 2019, two things are more obvious than ever about the Android app situation on Chrome. The first is that the "build it and they will come" mantra never panned out. Developers never created an appreciable number of Android app experiences designed for Chrome (just as they never did for Android tablets). The second is that, quite frankly, Android apps are very bad on Chrome OS. Performance is highly variable, and interface bugs are basically unending because most of those apps were never designed for a point-and-click operating system. Sure, they crash less often than they did in the early days, but anyone saying that Android apps on Chrome OS are a good experience is delusional.

Those apps are also a crutch that Chrome leans on to this day. Chrome OS doesn't have a robust photo editor? Don't worry, you can download an app! Chrome doesn't have native integration with cloud file services like Box, Dropbox, or OneDrive? Just download the app! Chrome doesn't have Microsoft Office? App! But this "solution" has basically become an insult to Chrome's users, forcing them to live inside a half-baked Android environment using apps that were almost exclusively designed for 6" touchscreens, and which exist in a containerized state that effectively firewalls them from much of the Chrome operating system. As a result, file handling is a nightmare, with only a very limited number of folders accessible to those applications, and the task of finding them from inside those apps a labyrinthine exercise no one should have to endure in 2019. This isn't a tenable state of affairs -- it's computing barbarism as far as I'm concerned. And yet, I've seen zero evidence that the Chrome team intends to fix it. It's just how it is. But Android apps, so far as I can tell, are basically the plan for Chrome. Certainly, Linux environment support is great for enthusiasts and developers, but there are very few commonly-used commercial applications available on Linux, with no sign that will change in the near future. It's another dead end. And if you want an even more depressing picture of Chrome's content ecosystem, just look at the pitiable situation with web apps.

AI

MIT's New Tool Predicts How Fast a Chip Can Run Your Code (thenextweb.com) 13

Folks at the Massachusetts Institute of Technology (MIT) have developed a new machine learning-based tool that predicts how fast code will run on various chips, helping developers tune their applications for specific processor architectures. From a report: Traditionally, developers have gauged a chip's performance using the performance models built into compilers, simulating the execution of basic blocks -- straight-line sequences of machine-level instructions with no branches -- but these performance models are rarely validated against real-life processor performance. The MIT researchers developed an AI model called Ithemal, training it to predict how fast a chip can run unseen basic blocks. It is backed by a benchmark suite called BHive containing 300,000 basic blocks from specialized fields such as machine learning, cryptography, and graphics. The team presented a paper [PDF] at the NeurIPS conference in December describing a new technique for measuring code performance on various processors. The paper also describes Vemal, a new automatically generated vectorization algorithm for compiler optimization.
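The learning setup the report describes -- predict a basic block's timing from its contents, using measured data rather than a hand-written model -- can be sketched with a toy linear regression. This is purely illustrative: the instruction categories, per-instruction costs, and block data below are all invented, and the real Ithemal uses a neural network over full instruction sequences.

```python
# Toy performance model: learn a cost per instruction type from
# "measured" basic-block latencies, via plain stochastic gradient
# descent on squared error. All numbers are synthetic (generated
# from hidden costs add=2, mul=5, load=3 cycles).

# Each block is summarized by counts of (add, mul, load) instructions,
# paired with its measured latency in cycles.
blocks = [
    ((2, 0, 1), 7.0),
    ((0, 2, 0), 10.0),
    ((1, 1, 1), 10.0),
    ((3, 1, 0), 11.0),
    ((1, 0, 2), 8.0),
]

costs = [0.0, 0.0, 0.0]  # learned cost per instruction type
lr = 0.01                # learning rate

for _ in range(10000):
    for feats, cycles in blocks:
        pred = sum(c * f for c, f in zip(costs, feats))
        err = pred - cycles
        # Gradient of (pred - cycles)^2 w.r.t. each cost is 2*err*f;
        # the constant factor is folded into the learning rate.
        for i, f in enumerate(feats):
            costs[i] -= lr * err * f

print([round(c, 2) for c in costs])  # recovers roughly [2.0, 5.0, 3.0]
```

A learned model like this can be queried for blocks it never timed, which is the point: Ithemal does the same at scale, replacing the linear model with a recurrent network and the toy counts with real x86-64 instruction sequences from BHive.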
