Windows

Microsoft Does Not Want You To Use iPerf3 To Measure Network Performance on Windows 60

An anonymous reader shares a report: iPerf is a fairly popular cross-platform tool that is used by many to measure network performance and diagnose any potential issues in this area. The open-source utility is maintained by an organization called Energy Sciences Network (ESnet) and officially supports Linux, Unix, and Windows. However, Microsoft has now published a detailed blog post explaining why you should not use the latest version, iPerf3, on Windows installations.

Microsoft has highlighted three key reasons to discourage the use of iPerf3 on Windows. The first is that ESnet does not support this version on Windows, and recommends iPerf2 instead. On its website, ESnet has emphasized that CentOS 7 Linux, FreeBSD 11, and macOS 10.12 are the only supported platforms. Another very important reason not to use iPerf3 on Windows is that it does not make native OS calls. Instead, it leverages Cygwin as an emulation layer, which obviously comes with a performance penalty. This alone means that iPerf3 on Windows isn't really an ideal candidate for benchmarking your network. While Microsoft has praised the maintainers who are trying to get iPerf3 to run on Windows via emulation, another flaw with this approach is that some advanced networking options simply aren't available on Windows or may behave in unexpected ways.
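For those who still need numbers, iPerf2 (the version ESnet recommends on Windows) can be scripted like any other CLI tool. A minimal sketch in Python, assuming iperf is on the PATH and an "iperf -s" server is already running on the other machine; the hostname is a placeholder:

```python
# Hedged sketch (not from Microsoft's post): drive a 10-second iPerf2
# TCP throughput test from Python. "server.example" is illustrative.
import subprocess

result = subprocess.run(
    ["iperf", "-c", "server.example", "-t", "10"],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)
```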
Operating Systems

VMS Software Prunes OpenVMS Hobbyist Program (theregister.com) 60

Liam Proven reports via The Register: Bad news for those who want to play with OpenVMS in non-production use. Older versions are disappearing, and the terms are getting much more restrictive. The corporation behind the continued development of OpenVMS, VMS Software, Inc. -- or VSI to its friends, if it has any left after this -- has announced the latest Updates to the Community Program. The news does not look good: you can't get the Alpha and Itanium versions any more, only a limited x86-64 edition.

OpenVMS is one of the granddaddies of big serious OSes. A direct descendant of the OSes that inspired DOS, CP/M, OS/2, and Windows, as well as the native OS of the hardware on which Unix first went 32-bit, VMS has been around for nearly half a century. For decades, its various owners have offered various flavors of "hobbyist program" under which you could get licenses to install and run it for free, as long as it wasn't in production use. After Compaq acquired DEC, and HP then acquired Compaq, its prospects looked checkered. HP officially killed it off in 2013, then in 2014 granted it a reprieve and sold it off instead. New owner VSI ported it to x86-64, releasing that new version 9.2 in 2022. Around this time last year, we covered VSI adding AMD support and opening a hobbyist program of its own. It seems from the latest announcement that it has been disappointed by the reception: "Despite our initial aspirations for robust community engagement, the reality has fallen short of our expectations. The level of participation in activities such as contributing open source software, creating wiki articles, and providing assistance on forums has not matched the scale of the program. As a result, we find ourselves at a crossroads, compelled to reassess and recalibrate our approach."

Although HPE stopped offering hobbyist licenses for the original VAX versions of OpenVMS in 2020, VSI continued to maintain OpenVMS 8 (in other words, the Alpha and Itanium editions) while it worked on version 9 for x86-64. VSI even offered a Student Edition, which included a freeware Alpha emulator and a copy of OpenVMS 8.4 to run inside it. Those licenses run out in 2025, and they won't be renewed. If you have vintage DEC Alpha or HP Integrity boxes with Itanic chips, you won't be able to get a legal licensed copy of OpenVMS for them, or renew the license of any existing installations -- unless you pay, of course. There will still be a Community license edition, but from now on it's x86-64 only. Although OpenVMS 9 mainly targets hypervisors anyway, it does support bare-metal operations on a single model of HPE server, the ProLiant DL380 Gen10. If you have one of them to play with -- well, tough. Now Community users only get a VM image, supplied as a VMware .vmdk file. It contains a ready-to-go "OpenVMS system disk with OpenVMS, compilers and development tools installed." Its license runs for a year, after which you will get a fresh copy. This means you won't be able to configure your own system and keep it alive -- you'll have to recreate it, from scratch, annually. The only alternative for those with older systems is to apply to be an OpenVMS Ambassador.

Unix

OpenBSD 7.5 Released (openbsd.org) 62

Slashdot reader Mononymous writes: The latest version of OpenBSD, the FOSS Unix-like operating system focused on correctness and security over features and performance, has been released. This version includes newer driver support, performance improvements, stability fixes, and lots of package updates. One highlight is a complete port of KDE Plasma 5.

You can view the announcement and get the bits at OpenBSD.org.

Phoronix reports that with OpenBSD 7.5 "there is a number of improvements for ARM (AArch64) hardware, never-ending kernel optimizations and other tuning work, countless package updates, and other adjustments to this popular BSD platform."
Unix

In Development Since 2019, NetBSD 10.0 Finally Released (phoronix.com) 37

"After being in development since 2019, the huge NetBSD 10.0 is out today as a wonderful Easter surprise," reports Phoronix: NetBSD 10 provides WireGuard support, support for many newer Arm platforms including for Apple Silicon and newer Raspberry Pi boards, a new Intel Ethernet drive, support for Realtek 2.5GbE network adapters, SMP performance improvements, automatic swap encryption, and an enormous amount of other hardware support improvements that accumulated over the past 4+ years.

Plus there is no shortage of bug fixes and performance optimizations with NetBSD 10. Tests of the in-development NetBSD 10.0 back in 2020 showed that it was already 12% faster than NetBSD 9 at that point.

"A lot of development went into this new release," NetBSD wrote on their blog, saying "This also caused the release announcement to be one of the longest we ever did."

Among the new userspace programs is warp(6), which they describe as a "classic BSD space war game (copyright donated to the NetBSD Foundation by Larry Wall)."
Desktops (Apple)

Apple Criticized For Changing the macOS version of cURL (daniel.haxx.se) 75

"On December 28 2023, bugreport 12604 was filed in the curl issue tracker," writes cURL lead developer Daniel Stenberg: The title stated of the problem in this case was quite clear: flag -cacert behavior isn't consistent between macOS and Linux , and it was filed by Yuedong Wu.

The friendly reporter showed how the curl version bundled with macOS behaves differently than curl binaries built entirely from open source. Even when running the same curl version on the same macOS machine.

The curl command line option --cacert provides a way for the user to tell curl that this is the exact set of CA certificates to trust when doing the following transfer. If the TLS server cannot provide a certificate that can be verified with that set of certificates, curl should fail and return an error. This particular behavior and functionality in curl has been established for many years (the option was added to curl in December 2000) and is of course provided to let users know that they are communicating with a known and trusted server. A pretty fundamental part of what TLS does, really.

When this command line option is used with the curl version shipped by Apple on macOS, curl appears to fall back and check the system CA store in case the provided set of CA certs fails verification. That is a secondary check that was not asked for, is not documented, and frankly comes as a complete surprise. Therefore, when a user runs the check with a trimmed and dedicated CA cert file, it will not fail if the system CA store contains a cert that can verify the server!

This is a security problem because now suddenly certificate checks pass that should not pass.
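The report is easy to reproduce in principle. A sketch under assumptions: "wrong-ca.pem" stands for any CA bundle that cannot verify the target host, and the host itself is illustrative. A from-source curl should exit with code 60; the Apple-shipped binary was reported to succeed anyway:

```python
# Sketch of the repro: verification against a deliberately wrong CA set.
import subprocess

cmd = ["curl", "--silent", "--show-error", "--output", "/dev/null",
       "--cacert", "wrong-ca.pem", "https://example.com/"]
result = subprocess.run(cmd, capture_output=True, text=True)

# Stock curl: exit code 60 (CURLE_PEER_FAILED_VERIFICATION).
# Apple's /usr/bin/curl, per the bug report: exit code 0.
print("exit code:", result.returncode)
print(result.stderr.strip())
```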

"We don't consider this something that needs to be addressed in our platforms," Apple Product Security responded. Stenberg's blog post responds, "I disagree."

Long-time Slashdot reader lee1 shares their reaction: I started to sour on MacOS about 20 years ago when I discovered that they had, without notice, substituted their own, nonstandard version of the Readline library for the one that the rest of the Unix-like world was using. This broke gnuplot and a lot of other free software...

Apple is still breaking things, this time with serious security and privacy implications.

Unix

Remembering How Plan 9 Evolved at Bell Labs (theregister.com) 36

jd (Slashdot reader #1,658) writes: The Register has been running a series of articles about the evolution of Unix, from humble beginnings to the transition to Plan 9. There is a short discussion of why Plan 9 and its successors never really took off (despite being vastly superior to microkernels), along with the ongoing development of 9Front.
From the article: Plan 9 was in some way a second implementation of the core concepts of Unix and C, but reconsidered for a world of networked graphical workstations. It took many of the trendy ideas of late-1980s computing, both of academic theories and of the computer industry of the time, and it reinterpreted them through the jaded eyes of two great gurus, Kenneth Thompson and Dennis Ritchie (and their students) — arguably, design geniuses who saw their previous good ideas misunderstood and misinterpreted.

In Plan 9, networking is front and center. There are good reasons why this wasn't the case with Unix — it was being designed and built at the same time as local area networking was being invented. UNIX Fourth Edition, the first version written in C, was released in 1973 — the same year as the first version of Ethernet.

Plan 9 puts networking right into the heart of the design. While Unix was later used as the most common OS for standalone workstations, Plan 9 was designed for clusters of computers, some being graphical desktops and some shared servers...

Because everything really is a file, displaying a window on another machine can be as simple as making a directory and populating it with some files. You can start programs on other computers, but display the results on yours — all without any need for X11 or any visible networking at all.

This means all the Unixy stuff about telnet and rsh and ssh and X forwarding and so on just goes away. It makes X11 look very overcomplicated, and it makes Wayland look like it was invented by Microsoft.
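A toy illustration of the files-as-interface idea (plain Python with invented file names, not real 9P or actual Plan 9 window files): once a window's state is just files in a directory, anything that can write files, including a remote machine via a network filesystem, can drive it.

```python
# Toy analogy only: a "window" whose entire interface is plain files.
import pathlib, tempfile

win = pathlib.Path(tempfile.mkdtemp()) / "win1"
win.mkdir()
(win / "label").write_text("hello from another machine\n")  # set the title
(win / "body").write_text("window contents\n")              # set the body

# A "window system" need only read the files back:
for f in sorted(win.iterdir()):
    print(f.name, "=>", f.read_text().strip())
```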

Unix

Remembering Unix Desktops - and What We Can Learn From Them (theregister.com) 155

"As important as its historically underhanded business dealings were for its success, Microsoft didn't have to cheat to win," argues a new article in the Register.

"The Unix companies were doing a great job of killing themselves off." You see, while there were many attempts to create software development standards for Unix, they were too general to do much good — for example Portable Operating System Interface (POSIX) — or they became mired in the business consortium fights between the Open Systems Foundation and Unix International, which became known as the Unix wars.

While the Unix companies were busy ripping each other to shreds, Microsoft was smiling all the way to the bank. The core problem was that the Unix companies couldn't settle on software standards. Independent Software Vendors (ISV) had to write applications for each Unix platform. Each of these had only a minute desktop market share. It simply made no business sense for programmers to write one version of an application for SCO OpenDesktop (also known as OpenDeathtrap), another for NeXTStep, and still another one for SunOS. Does that sound familiar? That kind of thing is still a problem for the Linux desktop, and it's why I'm a big fan of Linux containerized desktop applications, such as Red Hat's Flatpak and Canonical's Snap.

By the time the two sides finally made peace by joining forces in The Open Group in 1996, it was too late. Unix was crowded out on the conventional desktop, and the workstation became pretty much a Sun Microsystems-only play.

Linux's GPL license created an "enforced" consortium that allowed it to take over, according to the article — and with Linus Torvalds as Linux's single leader, "it avoided the old Unix trap of in-fighting... I've been to many Linux Plumbers meetings. There, I've seen him and the top Linux kernel developers work with each other without any drama. Today's Linux is a group effort... The Linux distributors and developers have learned their Unix history lessons. They've realized that it takes more than open source; it takes open standards and consensus to make a successful desktop operating system."
And the article also points out that one of those early Unix desktops "is still alive, well, and running in about one in four desktops." That operating system, of course, is macOS, the direct descendant of NeXT's NeXTSTEP. You could argue that macOS, based on the multi-threaded, multi-processing microkernel operating system Mach, BSD Unix, and the open source Darwin, is the most successful of all Unix operating systems.
Unix

Should New Jersey's Old Bell Labs Become a 'Museum of the Internet'? (medium.com) 54

"Bell Labs, the historic headwaters of so many inventions that now define our digital age, is closing in Murray Hill," writes journalism professor Jeff Jarvis (in an op-ed for New Jersey's Star-Ledger newspaper).

"The Labs should be preserved as a historic site and more." I propose that Bell Labs be opened to the public as a museum and school of the internet.

The internet would not be possible without the technologies forged at Bell Labs: the transistor, the laser, information theory, Unix, communications satellites, fiber optics, advances in chip design, cellular phones, compression, microphones, talkies, the first digital art, and artificial intelligence — not to mention, of course, many advances in networks and the telephone, including the precursor to the device we all carry and communicate with today: the Picturephone, displayed as a futuristic fantasy at the 1964 World's Fair.

There is no museum of the internet. Silicon Valley has its Computer History Museum. New York has museums for television and the moving image. Massachusetts boasts a charming Museum of Printing. Search Google for a museum of the internet and you'll find amusing digital artifacts, but nowhere to immerse oneself in and study this immensely impactful institution in society.

Where better to house a museum devoted to the internet than New Jersey, home not only of Bell Labs but also at one time the headquarters of the communications empire, AT&T, our Ma Bell...? The old Bell Labs could be more than a museum, preserving and explaining the advances that led to the internet. It could be a school... Imagine if Bell Labs were a place where scholars and students in many disciplines — technologies, yes, but also anthropology, sociology, psychology, history, ethics, economics, community studies, design — could gather to teach and learn, discuss and research.

The text of Jarvis's piece is behind subscription walls, but has apparently been re-published on X by innovation theorist John Nosta.

In one of the most interesting passages, Jarvis remembers visiting Bell Labs in 1995. "The halls were haunted with genius: lab after lab with benches and blackboards and history within. We must not lose that history."
Wine

Wine 9.0 Released (9to5linux.com) 15

Version 9.0 of Wine, the free and open-source compatibility layer that lets you run Windows apps on Unix-like operating systems, has been released. "Highlights of Wine 9.0 include an experimental Wayland graphics driver with features like basic window management, support for multiple monitors, high-DPI scaling, relative motion events, as well as Vulkan support," reports 9to5Linux. From the report: The Vulkan driver has been updated to support Vulkan 1.3.272 and later, the PostScript driver has been reimplemented to work from Windows-format spool files and avoid any direct calls from the Unix side, and there's now a dark theme option on WinRT theming that can be enabled in WineCfg. Wine 9.0 also adds support for many more instructions to Direct3D 10 effects, implements the Windows Media Video (WMV) decoder DirectX Media Object (DMO), implements the DirectShow Audio Capture and DirectShow MPEG-1 Video Decoder filters, and adds support for video and system streams, as well as audio streams to the DirectShow MPEG-1 Stream Splitter filter.

Desktop integration has been improved in this release to allow users to close the desktop window in full-screen desktop mode by using the "Exit desktop" entry in the Start menu, as well as support for export URL/URI protocol associations as URL handlers to the Linux desktop. Audio support has been enhanced in Wine 9.0 with the implementation of several DirectMusic modules, DLS1 and DLS2 sound font loading, support for the SF2 format for compatibility with Linux standard MIDI sound fonts, Doppler shift support in DirectSound, Indeo IV50 Video for Windows decoder, and MIDI playback in dmsynth.

Among other noteworthy changes, Wine 9.0 brings loader support for ARM64X and ARM64EC modules, along with the ability to run existing Windows binaries on ARM64 systems and initial support for building Wine for the ARM64EC architecture. There's also a new 32-bit x86 emulation interface, a new WoW64 mode that supports running of 32-bit apps on recent macOS versions that don't support 32-bit Unix processes, support for DirectInput action maps to improve compatibility with many old video games that map controller inputs to in-game actions, as well as Windows 10 as the default Windows version for new prefixes. Last but not least, the kernel has been updated to support address space layout randomization (ASLR) for modern PE binaries, better memory allocation performance through the Low Fragmentation Heap (LFH) implementation, and support memory placeholders in the virtual memory allocator to allow apps to reserve virtual space. Wine 9.0 also adds support for smart cards, adds support for Diffie-Hellman keys in BCrypt, implements the Negotiate security package, adds support for network interface change notifications, and fixes many bugs.
For a full list of changes, check out the release notes. You can download Wine 9.0 from WineHQ.
Stats

What Were Slashdot's Top 10 Stories of 2023? 22

Slashdot's 10 most-visited stories of 2023 seemed to touch on all the themes of the year, with a story about AI, two about electric cars, two about Linux, and two about the Rust programming language.

And at the top of this list, the #1 story of the year drew over 100,000 views...

Interestingly, a story that ran on New Year's Eve of 2022 attracted so much traffic, it would've been the second-most visited story for all of 2023 — if it had run just a few hours later. That story?

Systemd's Growth Over 2022.

Social Networks

The Rise and Fall of Usenet (zdnet.com) 130

An anonymous reader quotes a report from ZDNet: Long before Facebook existed, or even before the Internet, there was Usenet. Usenet was the first social network. Now, with Google Groups abandoning Usenet, this oldest of all social networks is doomed to disappear. Some might say it's well past time. As Google declared, "Over the last several years, legitimate activity in text-based Usenet groups has declined significantly because users have moved to more modern technologies and formats such as social media and web-based forums. Much of the content being disseminated via Usenet today is binary (non-text) file sharing, which Google Groups does not support, as well as spam." True, these days, Usenet's content is almost entirely spam, but in its day, Usenet was everything that Twitter and Reddit would become and more.

In 1979, Duke University computer science graduate students Tom Truscott and Jim Ellis conceived of a network of shared messages under various topics. These messages, also known as articles or posts, were submitted to topic categories, which became known as newsgroups. Within those groups, messages were bound together in threads and sub-threads. [...] In 1980, Truscott and Ellis, using the Unix-to-Unix Copy Protocol (UUCP), hooked up with the University of North Carolina to form the first Usenet nodes. From there, it would rapidly spread over the pre-Internet ARPANet and other early networks. These messages would be stored and retrieved from news servers. These would "peer" to each other so that messages to a newsgroup would be shared from server to server and user to user so that within hours, your messages would reach the entire networked world. Usenet would evolve its own network protocol, Network News Transfer Protocol (NNTP), to speed the transfer of these messages. Today, the social network Mastodon uses a similar approach with the ActivityPub protocol, while other social networks, such as Threads, are exploring using ActivityPub to connect with Mastodon and the other social networks that support ActivityPub. As the saying goes, everything old is new again.

[...] Usenet was never an organized social network. Each server owner could -- and did -- set its own rules. Mind you, there was some organization to begin with. The first 'mainstream' Usenet groups, comp, misc, news, rec, soc, and sci hierarchies, were widely accepted and disseminated until 1987. Then, faced with a flood of new groups, a new naming plan emerged in what was called the Great Renaming. This led to a lot of disputes and the creation of the talk hierarchy. This and the first six became known as the Big Seven. Then the alt groups emerged as a free speech protest. Afterward, fewer Usenet sites made it possible to access all the newsgroups. Instead, maintainers and users would have to decide which ones they'd support. Over the years, Usenet began to decline as group discussions were overwhelmed by spam and flame wars.
"If, going forward, you want to keep an eye on Usenet -- things could change, miracles can happen -- you'll need to get an account from a Usenet provider," writes ZDNet's Steven Vaughan-Nichols. "I favor Eternal September, which offers free access to the discussion Usenet groups; NewsHosting, $9.99 a month with access to all the Usenet groups; EasyNews, $9.98 a month with fast downloads, and a good search engine; and Eweka, 9.50 Euros a month and EU only servers."

"You'll also need a Usenet client. One popular free one is Mozilla's Thunderbird E-Mail client, which doubles as a Usenet client. EasyNews also offers a client as part of its service. If you're all about downloading files, check out SABnzbd."
Microsoft

When Linux Spooked Microsoft: Remembering 1998's Leaked 'Halloween Documents' (catb.org) 59

It happened a quarter of a century ago. The New York Times wrote that "An internal memorandum reflecting the views of some of Microsoft's top executives and software development managers reveals deep concern about the threat of free software and proposes a number of strategies for competing against free programs that have recently been gaining in popularity." The memo warns that the quality of free software can meet or exceed that of commercial programs and describes it as a potentially serious threat to Microsoft. The document was sent anonymously last week to Eric Raymond, a key figure in a loosely knit group of software developers who collaboratively create and distribute free programs ranging from operating systems to Web browsers. Microsoft executives acknowledged that the document was authentic...

In addition to acknowledging that free programs can compete with commercial software in terms of quality, the memorandum calls the free software movement a "long-term credible" threat and warns that employing a traditional Microsoft marketing strategy known as "FUD," an acronym for "fear, uncertainty and doubt," will not succeed against the developers of free software. The memorandum also voices concern that Linux is rapidly becoming the dominant version of Unix for computers powered by Intel microprocessors.

The competitive issues, the note warns, go beyond the fact that the software is free. It is also part of the open-source software, or O.S.S., movement, which encourages widespread, rapid development efforts by making the source code — that is, the original lines of code written by programmers — readily available to anyone. This enables programmers the world over to continually write or suggest improvements or to warn of bugs that need to be fixed. The memorandum notes that open software presents a threat because of its ability to mobilize thousands of programmers. "The ability of the O.S.S. process to collect and harness the collective I.Q. of thousands of individuals across the Internet is simply amazing," the memo states. "More importantly, O.S.S. evangelization scales with the size of the Internet much faster than our own evangelization efforts appear to scale."

Back in 1998, Slashdot's CmdrTaco covered the whole brouhaha — including this CNN article: A second internal Microsoft memo on the threat Linux poses to Windows NT calls the operating system "a best-of-breed Unix" and wonders aloud if the open-source operating system's momentum could be slowed in the courts.

As with the first "Halloween Document," the memo — written by product manager Vinod Valloppillil and another Microsoft employee, Josh Cohen — was obtained by Linux developer Eric Raymond and posted on the Internet. In it, Cohen and Valloppillil, who also authored the first "Halloween Document," appear to suggest that Microsoft could slow the open-source development of Linux with legal battles. "The effect of patents and copyright in combating Linux remains to be investigated," the duo wrote.

Microsoft's slogan in 1998 was "Where do you want to go today?" So Eric Raymond published the documents on his web site under the headline "Where will Microsoft try to drag you today? Do you really want to go there?"

25 years later, and it's all still up there and preserved for posterity on Raymond's web page — a collection of leaked Microsoft documents and related materials known collectively as "the Halloween documents." And Raymond made a point of thanking the writers of the documents, "for authoring such remarkable and effective testimonials to the excellence of Linux and open-source software in general."

Thanks to long-time Slashdot reader mtaht for remembering the documents' 25th anniversary...
Python

Experimental Project Attempts a Python Virtual Shell for Linux (cjshayward.com) 62

Long-time Slashdot reader CJSHayward shares "an attempt at Python virtual shell."

The home-brewed project "mixes your native shell with Python with the goal of letting you use your regular shell but also use Python as effectively a shell scripting language, as an alternative to your shell's built-in scripting language... I invite you to explore and improve it!"

From the web site: The Python Virtual Shell (pvsh or 'p' on the command line) lets you mix zsh / bash / etc. built-in shell scripting with slightly modified Python scripting. It's kind of like Brython [a Python implementation for client-side web programming], but for the Linux / Unix / Mac command line...

The core concept is that all Python code is indented with tabs, with an extra tab at the beginning to mark Python code, and all shell commands (including some shell builtins) have zero tabs of indentation. They can be mixed line-by-line, offering an opportunity to use built-in zsh, bash, etc. scripting or Python scripting as desired.

The Python implementation is incomplete; it doesn't support breaking a statement across multiple lines. Nonetheless, this offers a tool to fuse shell- and Python-based interactions from the Linux / Unix / Mac command line.
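The dispatch rule is simple enough to sketch. The following is an illustration of the concept as described, not pvsh's actual code: tab-prefixed lines accumulate as Python, zero-indent lines go to the shell.

```python
# Concept sketch: one leading tab marks Python, no tab marks shell.
import subprocess, textwrap

def run_mixed(script: str) -> None:
    py: list[str] = []

    def flush() -> None:
        if py:
            exec(textwrap.dedent("".join(py)), {})  # strip the marker tab
            py.clear()

    for line in script.splitlines(keepends=True):
        if line.startswith("\t"):
            py.append(line)           # Python, marked by the extra tab
        else:
            flush()                   # run any pending Python first
            subprocess.run(line, shell=True)
    flush()

run_mixed("echo hello from the shell\n\tprint('hello from Python')\n")
```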

Open Source

OpenBSD 7.4 Released (phoronix.com) 8

Long-time Slashdot reader Noryungi writes: OpenBSD 7.4 has been officially released. The 55th release of this BSD operating system, known for being security-oriented, brings a lot of new things, including a dynamic tracer, pfsync improvements, loads of security goodies, and virtualization improvements. Grab your copy today! As mentioned by Phoronix's Michael Larabel, some of the key highlights include:

- Dynamic Tracer (DT) and Utrace support on AMD64 and i386 OpenBSD
- Power savings for those running OpenBSD 7.4 on Apple Silicon M1/M2 CPUs by allowing deep idle states when available for the idle loop and suspend
- Support for the PCIe controller found on Apple M2 Pro/Max SoCs
- Allow updating of AMD CPU microcode when a newer patch is available
- A workaround for the AMD Zenbleed CPU bug
- Various SMP improvements
- Updating the Direct Rendering Manager (DRM) graphics driver support against the upstream Linux 6.1.55 state
- New drivers for supporting various Qualcomm SoC features
- Support for soft RAID disks was improved for the OpenBSD installer
- Enabling of Indirect Branch Tracking (IBT) on x86_64 and Branch Target Identifier (BTI) on ARM64 for capable processors

You can download and view all the new changes via OpenBSD.org.
GNU is Not Unix

GNU Celebrates Its 40th Anniversary (fsf.org) 49

Wednesday the Free Software Foundation celebrated "the 40th anniversary of the GNU operating system and the launch of the free software movement," with an announcement calling it "a turning point in the history of computing."

"Forty years later, GNU and free software are even more relevant. While software has become deeply ingrained into everyday life, the vast majority of users do not have full control over it... " On September 27, 1983, a computer scientist named Richard Stallman announced the plan to develop a free software Unix-like operating system called GNU, for "GNU's not Unix." GNU is the only operating system developed specifically for the sake of users' freedom, and has remained true to its founding ideals for forty years. Since 1983, the GNU Project has provided a full, ethical replacement for proprietary operating systems. This is thanks to the forty years of tireless work from volunteer GNU developers around the world.

When describing GNU's history and the background behind its initial announcement, Stallman (often known simply as "RMS") stated, "with a free operating system, we could again have a community of cooperating hackers — and invite anyone to join. And anyone would be able to use a computer without starting out by conspiring to deprive his or her friends."

"When we look back at the history of the free software movement — or the idea that users should be in control of their own computing — it starts with GNU," said Zoë Kooyman, executive director of the FSF, which sponsors GNU's development. "The GNU System isn't just the most widely used operating system that is based on free software. GNU is also at the core of a philosophy that has guided the free software movement for forty years."

Usually combined with the kernel Linux, GNU forms the backbone of the Internet and powers millions of servers, desktops, and embedded computing devices. Aside from its technical advancements, GNU pioneered the concept of "copyleft," the approach to software licensing that requires the same rights to be preserved in derivative works, and is best exemplified by the GNU General Public License (GPL). As Stallman stated, "The goal of GNU was to give users freedom, not just to be popular. So we needed to use distribution terms that would prevent GNU software from being turned into proprietary software. The method we use is called 'copyleft.'"

The free software community has held strong for forty years and continues to grow, as exemplified by the FSF's annual LibrePlanet conference on software freedom and digital ethics.

Kooyman continues, "We hope that the fortieth anniversary will inspire hackers, both old and new, to join GNU in its goal to create, improve, and share free software around the world. Software is controlling our world these days, and GNU is a critique and solution to the status quo that we desperately need in order to not have our technology control us."

"In honor of GNU's fortieth anniversary, its organizational sponsor the FSF is organizing a hackday for families, students, and anyone interested in celebrating GNU's anniversary. It will be held at the FSF's offices in Boston, MA on October 1."
Open Source

The Future of Open Source is Still Very Much in Flux (technologyreview.com) 49

Free and open software have transformed the tech industry. But we still have a lot to work out to make them healthy, equitable enterprises. From a report: When Xerox donated a new laser printer to MIT in 1980, the company couldn't have known that the machine would ignite a revolution. While the early decades of software development generally ran on a culture of open access, this new printer ran on inaccessible proprietary software, much to the horror of Richard M. Stallman, then a 27-year-old programmer at the university.

A few years later, Stallman released GNU, an operating system designed to be a free alternative to one of the dominant operating systems at the time: Unix. The free-software movement was born, with a simple premise: for the good of the world, all code should be open, without restriction or commercial intervention. Forty years later, tech companies are making billions on proprietary software, and much of the technology around us is inscrutable. But while Stallman's movement may look like a failed experiment, the free and open-source software movement is not only alive and well; it has become a keystone of the tech industry.

Python

Codon Compiler For Python Is Fast - but With Some Caveats (usenix.org) 36

For 16 years, Rik Farrow has been an editor for the long-running nonprofit Usenix. He's also been a consultant for 43 years (according to his biography at Usenix.org) — and even wrote the 1988 book Unix System Security: How to Protect Your Data and Prevent Intruders.

Today Farrow stopped by Slashdot to share his thoughts on Codon. rikfarrow writes: Researchers at MIT decided to build a compiler focused on speeding up genomics processing... Recently, they have posted their code on GitHub, and I gave it a test drive.
"Managed" languages produce code for a specific runtime (like JavaScript). Now Farrow's article at Usenix.org argues that Codon produces code "much faster than other managed languages, and in some cases faster than C/C++."

Codon-compiled code is faster because "it's compiled, variables are typed at compile time, and it supports parallel execution." But there's some important caveats: The "version of Python" part is actually an important point: the builders of Codon have built a compiler that accepts a large portion of Python, including all of the most commonly used parts — but not all... Duck typing means that the Codon compiler uses hints found in the source or attempts to deduce them to determine the correct type, and assigns that as a static type. If you wanted to process data where the type is unknown before execution, this may not work for you, although Codon does support a union type that is a possible workaround. In most cases of processing large data sets, the types are known in advance so this is not an issue...
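To make the typing point concrete, here is a hedged sketch (the second snippet's rejection reflects how a statically typed compiler like Codon is described to behave; exact diagnostics will differ):

```python
# Fine under static type deduction: x is inferred as int from the call.
def square(x):
    return x * x

print(square(5))

# A statically typed compiler such as Codon would likely reject this,
# since the list has no single element type; Codon's union type is the
# workaround mentioned above.
# mixed = [1, "two", 3.0]
```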

Codon is not the same as Python, in that the developers have not yet implemented all the features you would find in Python 3.10, and this, along with duck typing, will likely cause problems if you just try to compile existing scripts. I quickly ran into problems as I uncovered unsupported bits of Python, and, judging by the Issues section of their GitHub pages, so have other people.

Codon supports a JIT feature, so that instead of attempting to compile complete scripts, you can just add a @codon.jit decorator to functions that you think would benefit from being compiled or executed in parallel, becoming much faster to execute...
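Usage looks roughly like this (a sketch following the decorator pattern described; it assumes the Codon toolchain and its Python package are installed, and the exact install steps live in Codon's docs):

```python
import codon  # assumption: Codon's Python integration package is installed

@codon.jit
def sum_of_squares(n: int) -> int:
    # Compiled by Codon rather than interpreted by CPython.
    total = 0
    for i in range(n):
        total += i * i
    return total

print(sum_of_squares(10_000_000))
```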

Determining whether your projects will benefit from experimenting with Codon will mean taking the time to read the documentation. Codon is not exactly like Python. For example, there's support for Nvidia GPUs included as well, and I ran into a limitation when using a dictionary. I suspect that some potential users will appreciate that Codon takes Python as input and produces executables, making the distribution of code simpler while avoiding disclosure of the source. Codon, with its LLVM backend, also seems like a great solution for people wanting to use Python for embedded projects.

My uses of Python are much simpler: I can process millions of lines of nginx logs in seconds, so a reduction in execution time means little to me. I do think there will be others who can take full advantage of Codon.

Farrow's article also points out that Codon "must be licensed for commercial use, but versions older than three years convert to an Apache license. Non-commercial users are welcome to experiment with Codon."
Desktops (Apple)

Unix Pioneer Ken Thompson Announces He's Switching From Mac To Linux (youtube.com) 175

The closing keynote at the SCaLE 20x conference was delivered by 80-year-old Ken Thompson (co-creator of Unix, Plan 9, UTF-8, and the Go programming language).

Slashdot reader motang shared Thompson's answer to a question at the end about what operating system he uses today: I have, for most of my life — because I was sort of born into it — run Apple.

Now recently, meaning within the last five years, I've become more and more depressed, and what Apple is doing to something that should allow you to work is just atrocious. But they are taking a lot of space and time to do it, so it's okay.

And I have come, within the last month or two, to say, even though I've invested, you know, a zillion years in Apple — I'm throwing it away. And I'm going to Linux. To Raspbian in particular.

IBM

The SCO Lawsuit: Looking Back 20 Years Later (lwn.net) 105

"On March 7, 2003, a struggling company called The SCO Group filed a lawsuit against IBM," writes LWN.net, "claiming that the success of Linux was the result of a theft of SCO's technology..."

Two decades later, "It is hard to overestimate how much the community we find ourselves in now was shaped by a ridiculous lawsuit 20 years ago...." It was the claim of access to Unix code that was the most threatening allegation for the Linux community. SCO made it clear that, in its opinion, Linux was stolen property: "It is not possible for Linux to rapidly reach UNIX performance standards for complete enterprise functionality without the misappropriation of UNIX code, methods or concepts". To rectify this "misappropriation", SCO was asking for a judgment of at least $1 billion, later increased to $5 billion. As the suit dragged on, SCO also started suing Linux users as it tried to collect a tax for use of the system.

Though this has never been proven, it was widely assumed at the time that SCO's real objective was to prod IBM into acquiring the company. That would have solved SCO's ongoing business problems and IBM, for rather less than the amount demanded in court, could have made an annoying problem go away and also lay claim to the ownership of Unix — and, thus, Linux. To SCO's management, it may well have seemed like a good idea at the time. IBM, though, refused to play that game; the company had invested heavily into Linux in its early days and was uninterested in allowing any sort of intellectual-property taint to attach to that effort. So the company, instead, directed its not inconsiderable legal resources to squashing this attack. But notably, so did the development community as a whole, as did much of the rest of the technology industry.

Over the course of the following years — far too many years — SCO's case fell to pieces. The "misappropriated" technology wasn't there. Due to what must be one of the worst-written contracts in technology-industry history, it turned out that SCO didn't even own the Unix copyrights it was suing over. The level of buffoonery was high from the beginning and got worse; the company lost at every turn and eventually collapsed into bankruptcy.... Microsoft, which had not yet learned to love Linux, funded SCO and loudly bought licenses from the company. Magazines like Forbes were warning the "Linux-loving crunchies in the open-source movement" that they "should wake up". SCO was suggesting a license fee of $1,399 — per-CPU — to run Linux.... Such an effort, in less incompetent hands, could easily have damaged Linux badly.

As it went, SCO, despite its best efforts, instead succeeded in improving the position of Linux — in development, legal, and economic terms — considerably.

The article argues SCO's lawsuit ultimately proved that Linux didn't contain copyrighted code "in a far more convincing way than anybody else could have." (And the provenance of all Linux code contributions is now carefully documented.) The case also proved the need for lawyers to vigorously defend the rights of open source programmers. And most of all, it revealed the Linux community was widespread and committed.

And "Twenty years later, it is fair to say that Linux is doing a little better than The SCO Group. Its swaggering leader, who thought to make his fortune by taxing Linux, filed for personal bankruptcy in 2020."
Programming

GitHub Claims Source Code Search Engine Is a Game Changer (theregister.com) 39

Thomas Claburn writes via The Register: GitHub has a lot of code to search -- more than 200 million repositories -- and says last November's beta version of a search engine optimized for source code has caused a "flurry of innovation." GitHub engineer Timothy Clem explained that the company has had problems getting existing technology to work well. "The truth is from Solr to Elasticsearch, we haven't had a lot of luck using general text search products to power code search," he said in a GitHub Universe video presentation. "The user experience is poor. It's very, very expensive to host and it's slow to index." In a blog post on Monday, Clem delved into the technology used to scour just a quarter of those repos, a code search engine built in Rust called Blackbird.

Blackbird currently provides access to almost 45 million GitHub repositories, which together amount to 115TB of code and 15.5 billion documents. Sifting through that many lines of code requires something stronger than grep, a common command line tool on Unix-like systems for searching through text data. Using ripgrep on an 8-core Intel CPU to run an exhaustive regular expression query on a 13GB file in memory, Clem explained, takes about 2.769 seconds, or 0.6GB/sec/core. [...] At 0.01 queries per second, grep was not an option. So GitHub front-loaded much of the work into precomputed search indices. These are essentially maps of key-value pairs. This approach makes it less computationally demanding to search for document characteristics like the programming language or word sequences by using a numeric key rather than a text string. Even so, these indices are too large to fit in memory, so GitHub built iterators for each index it needed to access. According to Clem, these lazily return sorted document IDs that represent the rank of the associated document and meet the query criteria.
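A toy version of that idea (nothing like Blackbird's scale, and the structure is an illustrative guess from the description): text properties map to numeric keys, numeric keys map to sorted posting lists of document IDs, and queries walk lazy iterators over those lists instead of scanning text.

```python
from bisect import insort
from typing import Iterator

LANG_KEYS = {"python": 0, "rust": 1}   # text property -> numeric key
index: dict[int, list[int]] = {}       # numeric key -> sorted doc IDs

def add_doc(doc_id: int, language: str) -> None:
    insort(index.setdefault(LANG_KEYS[language], []), doc_id)

def docs_for(language: str) -> Iterator[int]:
    # Lazily yield sorted document IDs, like the iterators described.
    yield from index.get(LANG_KEYS[language], [])

add_doc(42, "rust")
add_doc(7, "rust")
add_doc(9, "python")
print(list(docs_for("rust")))  # [7, 42]
```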

To keep the search index manageable, GitHub relies on sharding -- breaking the data up into multiple pieces using Git's content addressable hashing scheme and on delta encoding -- storing data differences (deltas) to reduce the data and metadata to be crawled. This works well because GitHub has a lot of redundant data (e.g. forks) -- its 115TB of data can be boiled down to 25TB through deduplication data-shaving techniques. The resulting system works much faster than grep -- 640 queries per second compared to 0.01 queries per second. And indexing occurs at a rate of about 120,000 documents per second, so processing 15.5 billion documents takes about 36 hours, or 18 for re-indexing since delta (change) indexing reduces the number of documents to be crawled.
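The deduplication step can be sketched in a few lines as well (illustrative only): with content-addressable hashing, identical blobs, such as files shared across forks, collapse to one stored and indexed copy.

```python
import hashlib

store: dict[str, bytes] = {}   # content hash -> blob, stored once

def put(blob: bytes) -> str:
    key = hashlib.sha256(blob).hexdigest()
    store.setdefault(key, blob)        # a duplicate changes nothing
    return key

a = put(b"fn main() {}")
b = put(b"fn main() {}")               # identical file from a fork
assert a == b and len(store) == 1
```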
