The Almighty Buck

Epic's New Program Lets Developers Keep Their Revenue In Exchange For Exclusivity (theverge.com) 24

An anonymous reader quotes a report from The Verge: Epic Games will let developers keep 100 percent of their net revenues from the Epic Games Store for six months if they choose to make their games or apps exclusives for that time through its new First Run program, the company announced on Wednesday. Typically, Epic lets developers keep 88 percent of their revenues, with the company taking a 12 percent cut. For developers who launch a product through First Run, the split will return to 88 / 12 once the six months are up.

Developers who choose to participate in the Epic First Run program will see a few other benefits as well. Epic says First Run games and apps will be presented to Store users with "new exclusive badging, homepage placements, and dedicated collections" and will be featured in "relevant store campaigns including sales, events, and editorial as applicable." The program is open now, and the first products that will be eligible to be part of the program must launch on or after October 16th. [...] However, developers can be a part of First Run and still release their products on their own stores.
Here's what Epic says about which products are eligible: "A new release game or app which has not been previously released on another third-party PC store or included in a subscription service available on another third-party PC store. Games or apps with a pre-existing exclusivity deal with the Epic Games Store are not eligible for the program."
AI

Meta Releases Code Llama, a Code-Generating AI Model (techcrunch.com) 20

Meta, intent on making a splash in a generative AI space rife with competition, is on something of an open source tear. From a report: Following the release of AI models for generating text, translating languages and creating audio, the company today open sourced Code Llama, a machine learning system that can generate and explain code in natural language -- specifically English. Akin to GitHub Copilot and Amazon CodeWhisperer, as well as open source AI-powered code generators like StarCoder, StableCode and PolyCoder, Code Llama can complete code and debug existing code across a range of programming languages, including Python, C++, Java, PHP, TypeScript, C# and Bash.

"At Meta, we believe that AI models, but large language models for coding in particular, benefit most from an open approach, both in terms of innovation and safety," Meta wrote in a blog post shared with TechCrunch. "Publicly available, code-specific models can facilitate the development of new technologies that improve peoples' lives. By releasing code models like Code Llama, the entire community can evaluate their capabilities, identify issues and fix vulnerabilities." Code Llama, which is available in several flavors, including a version optimized for Python and a version fine-tuned to understand instructions (e.g. "Write me a function that outputs the fibonacci sequence"), is based on the Llama 2 text-generating model that Meta open sourced earlier this month. While Llama 2 could generate code, it wasn't necessarily good code -- certainly not up to the quality a purpose-built model like Copilot could produce.

Bitcoin

Bitcoin Developers Push Back Against Craig Wright's Claim to Billions of Dollars in Bitcoin (coindesk.com) 82

Long-time Slashdot reader UnknowingFool writes: In 2021, Craig Wright sued 12 bitcoin developers who refused to help him recover 111,000 bitcoins he claimed were lost in a hack. His company, Tulip Trading, wanted the developers to put in a backdoor mechanism in bitcoin that would override the ownership of the coins, arguing it was the developers' "fiduciary duty" to assist him. The developers allege (PDF) that Tulip and Wright never owned the coins and the evidence of ownership provided is "fabricated." Tulip Trading "never owned the digital assets and has commenced this claim fraudulently and in reliance on fabricated documents," the developers' lawyers said in a statement. "Dr. Wright has a long history of fraud, forgery, and dishonesty ... [and is using] the English courts as an instrument of fraud."
Java

IBM Says Its Generative AI Tool Can Convert Old COBOL Code To Java (theregister.com) 108

IBM is introducing the watsonx Code Assistant for Z, a tool that uses generative AI to translate COBOL code to Java. This tool is set to be available in Q4 2023 and aims to speed up the translation of COBOL to Java on IBM's Z mainframes. The Register reports: According to IBM, there are billions of lines of COBOL code out there as potential candidates for modernization (a report last year estimated the total figure at 775-850 billion lines). For this reason, the generative AI features in watsonx Code Assistant for Z are intended to help developers to assess and determine the code most in need of modernization, allowing them to more speedily update large applications and focus on critical tasks.

IBM wants to provide tooling for each step of the modernization process, starting with its Application Discovery and Delivery Intelligence (ADDI) inventory and analysis tool. Other steps include refactoring business services in COBOL, transforming the code to Java code, and then validating the resulting outcome with the aid of automated testing. The resulting Java code emitted by watsonx Code Assistant for Z will be object-oriented, but will still interoperate with the rest of the COBOL application, IBM claimed, as well as with key services such as CICS, IMS, DB2, and other z/OS runtimes.

Microsoft

Microsoft Announces Python In Excel 92

theodp writes: On Tuesday, Microsoft announced the Public Preview of Python in Excel, which "runs securely on the Microsoft Cloud".

From the Home Office in Redmond: "Python is one of the most popular programming languages today, loved by businesses and students alike and Excel is an essential tool to organize, manipulate and analyze all kinds of data. But, until now, there hasn't been an easy way to make those two worlds work together. Today, we are excited to introduce the Public Preview of Python in Excel -- making it possible to integrate Python and Excel analytics within the same Excel grid for uninterrupted workflow. Python in Excel combines Python's powerful data analysis and visualization libraries with Excel's features you know and love. You can manipulate and explore data in Excel using Python plots and libraries, and then use Excel's formulas, charts and PivotTables to further refine your insights... We're partnering with Anaconda, a leading enterprise-grade Python repository used by tens of millions of data practitioners worldwide. Python in Excel leverages Anaconda Distribution for Python running in Azure, which includes the most popular Python libraries such as pandas for data manipulation, statsmodels for advanced statistical modeling, and Matplotlib and seaborn for data visualization... While in Preview, Python in Excel will be included with your Microsoft 365 subscription. After the Preview, some functionality will be restricted without a paid license."
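
As a rough illustration of the kind of analysis the announcement is describing, the snippet below uses two of the libraries Microsoft names (pandas and seaborn) to summarize and plot a small, made-up dataset. It is ordinary Python, not the Excel-specific cell syntax, and the column names are invented for the example.

import pandas as pd
import seaborn as sns

# Hypothetical sales figures of the sort one might keep in an Excel grid
df = pd.DataFrame({
    "region": ["North", "South", "North", "South"],
    "revenue": [120, 95, 140, 110],
})

# Summarize with pandas, then visualize with seaborn
print(df.groupby("region")["revenue"].mean())
sns.barplot(data=df, x="region", y="revenue")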

Python creator Guido van Rossum, now a Microsoft Distinguished Engineer, helped define the architecture for Python in Excel and had this to say: "I'm excited that this excellent, tight integration of Python and Excel is now seeing the light of day. I expect that both communities will find interesting new uses in this collaboration, amplifying each partner's abilities. When I joined Microsoft three years ago, I would not have dreamed this would be possible. The Excel team excels!"
Programming

Can You Measure Software Developer Productivity? (mckinsey.com) 157

Long-time Slashdot reader theodp writes: "Measuring, tracking, and benchmarking developer productivity has long been considered a black box. It doesn't have to be that way." So begins global management consulting firm McKinsey in "Yes, You Can Measure Software Developer Productivity"... "Compared with other critical business functions such as sales or customer operations, software development is perennially undermeasured. The long-held belief by many in tech is that it's not possible to do it correctly—and that, in any case, only trained engineers are knowledgeable enough to assess the performance of their peers.

"Yet that status quo is no longer sustainable."

"All C-suite leaders who are not engineers or who have been in management for a long time will need a primer on the software development process and how it is evolving," McKinsey advises companies starting on a developer productivity initiative. "Assess your systems. Because developer productivity has not typically been measured at the level needed to identify improvement opportunities, most companies' tech stacks will require potentially extensive reconfiguration. For example, to measure test coverage (the extent to which areas of code have been adequately tested), a development team needs to equip their codebase with a tool that can track code executed during a test run."

Before getting your hopes up too high over McKinsey's 2023 developer productivity silver bullet suggestions, consider that Googling to "find a tool that can track code executed during a test run" will lead you back to COBOL test coverage tools from the '80s that offered this kind of capability and 40+ year-old papers that offered similar advice (1, 2, 3). A cynic might also suggest considering McKinsey's track record, which has had some notable misses.

Programming

Rust Users Push Back as Popular 'Serde' Project Ships Precompiled Binaries (bleepingcomputer.com) 17

"Serde, a popular Rust (de)serialization project, has decided to ship its serde_derive macro as a precompiled binary," reports Bleeping Computer.

"The move has generated a fair amount of push back among developers who worry about its future legal and technical implications, along with a potential for supply chain attacks, should the maintainer account publishing these binaries be compromised." According to the Rust package registry, crates.io, serde has been downloaded over 196 million times over its lifetime, whereas the serde_derive macro has scored more than 171 million downloads, attesting to the project's widespread circulation... The Serde ecosystem consists of data structures that know how to serialize and deserialize themselves along with data formats that know how to serialize and deserialize other things," states the project's website. Whereas, "derive" is one of its macros...

Some Rust developers request that precompiled binaries be kept optional and separate from the original "serde_derive" crate, while others have likened the move to the controversial code change to the Moq .NET project that sparked backlash. "Please consider moving the precompiled serde_derive version to a different crate and default serde_derive to building from source so that users that want the benefit of precompiled binary can opt-in to use it," requested one user. "Or vice-versa. Or any other solution that allows building from source without having to patch serde_derive... Having a binary shipped as part of the crate, while I understand the build time speed benefits, is for security reasons not a viable solution for some library users."

Users pointed out how the change could impact entities that are "legally not allowed to redistribute pre-compiled binaries, by their own licenses," specifically mentioning government-regulated environments.

The official response from Serde's maintainer: "The precompiled implementation is the only supported way to use the macros that are published in serde_derive. If there is implementation work needed in some build tools to accommodate it, someone should feel free to do that work (as I have done for Buck and Bazel, which are tools I use and contribute significantly to) or publish your own fork of the source code under a different name.

"Separately, regarding the commentary above about security, the best path forward would be for one of the people who cares about this to invest in a Cargo or crates.io RFC around first-class precompiled macros so that there is an approach that would suit your preferences; serde_derive would adopt that when available."
Programming

Why DARPA Hopes To 'Distill' Old Binaries Into Readable Code (theregister.com) 54

Researchers at Georgia Tech have developed a prototype pipeline for the Defense Advanced Research Projects Agency (DARPA) that can "distill" binary executables into human-intelligible code so that it can be updated and deployed in "weeks, days, or hours, in some cases." The work is part of a five-year, $10 million project with the agency. The Register reports: After running an executable through the university's "distillation" process, software engineers should be able to examine the generated HAR, figure out what the code does, and make changes to add new features, patch bugs, or improve security, and turn the HAR back into executable code, says GT associate professor and project participant Brendan Saltaformaggio. This would be useful for, say, updating complex software that was written by a contractor or internal team, where the source code is no longer (or never was) to hand, its creators are unavailable, and stuff needs to be fixed up. Reverse engineering the binary and patching in an update by hand can be a little hairy, hence DARPA's desire for something a bit more solid and automatic. The idea is to use this pipeline to freshen up legacy or outdated software that may have taken years and millions of dollars to develop some time ago.

Saltaformaggio told El Reg his team has the entire process working from start to finish, and with some level of stability, too. "DARPA sets challenges they like to use to test the capabilities of a project," he told us over the phone. "So far we've handled every challenge problem DARPA's thrown at us, so I'd say it's working pretty well." Saltaformaggio said his team's pipeline disassembles binaries into a graph structure with pseudo-code, and presented in a way that developers can navigate, and replace or add parts in C and C++. Sorry, Java devs and Pythonistas: Saltaformaggio tells us that there's no reason the system couldn't work with other programming languages, "but we're focused on C and C++. Other folks would need to build out support for that." Along with being able to deconstruct, edit, and reconstruct binaries, the team said its processing pipeline is also able to comb through HARs and remove extraneous routines. The team has also, we're told, baked in verification steps to ensure changes made to code within hardware ranging from jets and drones to plain-old desktop computers work exactly as expected with no side effects.

Firefox

Firefox Finally Outperforming Google Chrome In SunSpider (phoronix.com) 40

Michael Larabel writes via Phoronix: Mozilla developers are celebrating that they are now faster than Google Chrome with the SunSpider JavaScript benchmark, although that test has been superseded by the JetStream benchmark. Last week a new Firefox Nightly News was published that outlines that "We're now apparently beating Chrome on the SunSpider JavaScript benchmark!" The provided numbers now show Firefox easily beating Chrome in this decade-old JavaScript benchmark. The benchmarks come from AreWeFastYet.com. Meanwhile for the newer and more demanding JetStream 2.0 benchmark, Google Chrome continues to win easily over Firefox. You can learn more about the latest Firefox Nightly build advancements via Firefox Nightly News.
AI

Stack Overflow 'Evolves', Previewing AI-Powered Answers and Chat Followups (stackoverflow.blog) 64

"Stack Overflow is adding artificial intelligence to its offerings," reports ZDNet (which notes traffic to the Q&A site has dropped 5% in the last year).

So in a video, Stack Overflow's CEO Prashanth Chandrasekar says that search and question-asking "will evolve to provide you with instant summarized solutions with citations to sources, aggregated by generative AI — plus the option to ask follow-up questions in a chat-like format."

The New Stack provides some context: As computer scientist Santiago Valdarrama remarked in a tweet, "I don't remember the last time I visited Stack Overflow. Why would I when tools like Copilot and ChatGPT answer my questions faster without making me feel bad for asking?" It's a problem Stack Overflow CEO Prashanth Chandrasekar acknowledges because, well, he encountered it too.

"When I first started using Stack Overflow, I remember my first experience was quite harsh, because I basically asked a fairly simple question, but the standard on the website is pretty high," Chandrasekar told The New Stack. "When ChatGPT came out, it was a lot easier for people to go and ask ChatGPT without anybody watching...."

But what may be of more interest to developers is that Stack Overflow is now offering an IDE (integrated development environment) extension for Visual Studio Code that will be powered by OverflowAI. This means that coders will be able to ask a conversational interface a question and find solutions from within the IDE.

Stack Overflow also is launching a GenAI Stack Exchange, where the community can post and share knowledge on prompt engineering, getting the most out of AI and similar topics.

And they're integrating it into other workflows as well. "Of course, AI isn't replacing humans any time soon," CEO Chandrasekar says in the video. "But it can help you draft a question to pose to our community..."

Signups for the OverflowAI preview are available now. "With your help, we'll be putting AI to work," CEO Chandrasekar says in the video.
AI

ChatGPT's Odds of Getting Code Questions Correct are Worse Than a Coin Flip (theregister.com) 119

An anonymous reader shared this report from the Register: ChatGPT, OpenAI's fabulating chatbot, produces wrong answers to software programming questions more than half the time, according to a [pre-print] study from Purdue University. That said, the bot was convincing enough to fool a third of participants.

The Purdue team analyzed ChatGPT's answers to 517 Stack Overflow questions to assess the correctness, consistency, comprehensiveness, and conciseness of ChatGPT's answers. The U.S. academics also conducted linguistic and sentiment analysis of the answers, and questioned a dozen volunteer participants on the results generated by the model. "Our analysis shows that 52 percent of ChatGPT answers are incorrect and 77 percent are verbose," the team's paper concluded. "Nonetheless, ChatGPT answers are still preferred 39.34 percent of the time due to their comprehensiveness and well-articulated language style." Among the set of preferred ChatGPT answers, 77 percent were wrong...

"During our study, we observed that only when the error in the ChatGPT answer is obvious, users can identify the error," their paper stated. "However, when the error is not readily verifiable or requires external IDE or documentation, users often fail to identify the incorrectness or underestimate the degree of error in the answer." Even when the answer has a glaring error, the paper stated, two out of the 12 participants still marked the response preferred. The paper attributes this to ChatGPT's pleasant, authoritative style.

"From semi-structured interviews, it is apparent that polite language, articulated and text-book style answers, comprehensiveness, and affiliation in answers make completely wrong answers seem correct," the paper explained.

Oracle

Oracle, SUSE, and CIQ Go After Red Hat With the Open Enterprise Linux Association (zdnet.com) 70

In a groundbreaking move, CIQ, Oracle, and SUSE have come together to announce the formation of the Open Enterprise Linux Association (OpenELA). From a report: The goal of this new collaborative trade association is to foster "the development of distributions compatible with Red Hat Enterprise Linux (RHEL) by providing open and free enterprise Linux source code."

The inception of OpenELA is a direct response to Red Hat's recent alterations to RHEL source code availability. This new Delaware 501(c)(6) US nonprofit association will provide an open process for organizations to access source code. This will enable it to build RHEL-compatible distributions. The initiative underscores the importance of community-driven source code, which serves as a foundation for creating compatible distributions.

Mike McGrath, Red Hat's vice president of Red Hat Core Platforms, sparked this when he announced Red Hat would be changing how users can access RHEL's source code. For the non-Hatters among you, Core Platforms is the division in charge of RHEL. McGrath wrote, "CentOS Stream will now be the sole repository for public RHEL-related source code releases. For Red Hat customers and partners, source code will remain available via the Red Hat Customer Portal."

This made it much more difficult for RHEL clone vendors, such as AlmaLinux, Rocky Linux, and Oracle Linux, to create perfect RHEL variant distributions. AlmaLinux elected to try to work with Red Hat's new source code rules. Oracle restarted its old fighting ways with IBM/Red Hat; SUSE announced an RHEL-compatible distro fork plan; and Rocky Linux found new ways to obtain RHEL code. Now the last two, along with CIQ, which started Rocky Linux, have joined forces.

Programming

Should a Variable's Type Come After Its Name? (benhoyt.com) 321

Canonical engineering manager Ben Hoyt believes that a variable's name is more important than its type, so "the name should be more prominent and come first in declarations." In many popular programming languages, including C, C++, Java, and C#, when you define a field or variable, you write the type before the name. For example (in C++):

// Struct definition
struct person {
    std::string name;
    std::string email;
    int age;
};


In other languages, including Go, Rust, TypeScript, and Python (with type hints), you write the name before the type. For example (in Go):

// Struct definition
type Person struct {
    Name  string
    Email string
    Age   int
}

There's a nice answer in the Go FAQ about why Go chose this order: "Why are declarations backwards?". It starts with "they're only backwards if you're used to C", which is a good point — name-before-type has a long history in languages like Pascal. In fact, Go's type declaration syntax (and packages) were directly inspired by Pascal.

The FAQ goes on to point out that parsing is simpler with name-before-type, and declaring multiple variables is less error-prone than in C. In C, the following declares x to be a pointer, but (surprisingly at first!) y to be a normal integer:

int* x, y;

Whereas the equivalent in Go does what you'd expect, declaring both to be pointers:

var x, y *int

The Go blog even has an in-depth article by Rob Pike on Go's Declaration Syntax, which describes more of the advantages of Go's syntax over C's, particularly with arrays and function pointers.

Oddly, the article only hints at what I think is the more important reason to prefer name-before-type for everyday programming: it's clearer.

Hoyt argues a variable's name has more meaning (semantically) — pointing out dynamically-typed languages like Python and Ruby don't even need types, and that languages like Java, Go, C++ and C# now include type inference.

"I think the takeaway is this: we can't change the past, but if you're creating a new language, please put names before types!"
Programming

Do Developers Tend To Scrap Or Ship Their First Drafts? (ntietz.com) 100

Long-time Slashdot reader theodp writes: The necessity of multiple drafts may be an idea that's drilled into children's minds by teachers and parents, but in 2023 there's still a need to remind software engineers to Throw Away Your First Draft of Your Code. "The next time you start on a major project," advises Nicole Tietz-Sokolskaya, "I want you to write code for a couple of days and then delete it all. Just throw it away. I'm serious. And you should probably have some of your best engineers doing this throwaway work. It's going to save you time in the long run."

While Tietz-Sokolskaya's advice echoes that of Ernest Hemingway ("the first draft of anything is shit"), do developers tend to scrap or ship their first drafts in the real world?

Programming

The Most Prolific Packager For Alpine Linux Is Stepping Away (phoronix.com) 37

Michael Larabel, reporting at Phoronix: Alpine Linux remains one of the most popular lightweight Linux distributions built atop musl libc and Busybox. Alpine Linux has found significant use within containers and the embedded space, while now, sadly, the most prolific maintainer of packages for the Linux distribution has decided to step down from her roles. Alice "psykose," who is easily responsible for the highest number of commits per author over the past year, has decided to step down from maintaining her packages.

These Alpine aports stats put her at 13,894 commits over the past year. In comparison, the second most prolific packager saw just 2,053 commits... Or put another way, psykose has 6.7x the number of commits as the next packager. The 13.8k commits is also about half of the 26.8k commits seen in total over the past year. Over the weekend I was alerted to the fact that psykose/nekopsykose has begun dropping maintainership of packages she maintained. All of her recent alpinelinux/aports commits two days ago were removing packages she oversaw.

Programming

Salesforce Executive Shares 'Four Ways Coders Can Fight the Climate Crisis' (forbes.com) 79


Salesforce's chief impact officer, writing in Forbes: Code and computer programming — the backbone of modern business — has a long way to go before it can be called "green..." According to a recent report from the science journal Patterns, the information and communication technology sector accounts for up to 3.9% of global emissions... So far, the focus has been on reducing energy consumption in data centers and moving electrical grids away from fossil fuels. Now, coders and designers are ready for a similar push in software, crypto proof of work and AI compute power...

Our research revealed that 75% of UX designers, software developers and IT operations managers want software to do less damage to the environment. Yet nearly one in two don't know how to take action. Half of these technologists admit to not knowing how to mitigate environmental harm in their work, leading to 34% acknowledging that they "rarely or never" consider carbon emissions while typing a new line of code... Earlier this year, Salesforce launched a sustainability guide for technology that provides practical recommendations for aligning climate goals with software development.

In the article the Salesforce executive makes four recommendations, urging coders to design sites in ways that reduce the energy needed to display them. ("Even small changes to image size, color and type options can scale to large impacts.") They also recommend writing application code that uses less energy, which "can lead to significant emissions reductions, particularly when deployed at scale. Leaders can seek out apps that are coded to run natively in browsers which can lead to improvement in performance and a reduction in energy use."
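
As a toy illustration of the "code that uses less energy" point (this is a generic example, not taken from the Salesforce guide): less computational work means less CPU time, and therefore less energy, so algorithmic choices are part of the picture. The two Python functions below produce the same answer, but the second does far less work on large inputs.

from collections import Counter

def count_dupes_quadratic(items):
    # O(n^2): re-scans the whole list for every element
    return sum(1 for x in items if items.count(x) > 1)

def count_dupes_linear(items):
    # O(n): one pass to build a frequency table, one pass to count
    counts = Counter(items)
    return sum(1 for x in items if counts[x] > 1)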

Their article includes links to the energy-saving hackathon GreenHack and the non-profit Green Software Foundation. (Their site recently described how the IT company AVEVA used a Raspberry Pi in back of a hardware cluster as part of a system to measure software's energy consumption.)

But their first recommendation for fighting the climate crisis is "Adopt new technology like AI" to "make the software development cycle more energy efficient." ("At Salesforce, we're starting to see tremendous potential in using generative AI to optimize code and are excited to release this to customers in the future.")
Python

Python's Steering Council Plans to Make Its 'Global Interpreter Lock' Optional (python.org) 21

Python's Global Interpreter Lock "allows only one thread to hold the control of the Python interpreter," according to the tutorial site Real Python. (They add, "it can be a performance bottleneck in CPU-bound and multi-threaded code.")
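
A quick way to see the bottleneck in current CPython: splitting CPU-bound work across threads does not speed it up, because only one thread can execute Python bytecode at a time. A minimal sketch (timings will vary by machine, but the threaded version comes out roughly as slow as the sequential one):

import threading
import time

def busy(n=5_000_000):
    # Pure-Python, CPU-bound loop; threads contend for the GIL here
    total = 0
    for i in range(n):
        total += i
    return total

start = time.perf_counter()
busy()
busy()
print("sequential:", round(time.perf_counter() - start, 2), "s")

start = time.perf_counter()
threads = [threading.Thread(target=busy) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("two threads:", round(time.perf_counter() - start, 2), "s")  # not ~2x faster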

Friday the Python Steering Council "announced its intent to accept PEP 703 (Making the Global Interpreter Lock Optional in CPython), with initial support possibly showing up in the 3.13 release," reports LWN.net.

From the Steering Council's announcement: It's clear that the overall sentiment is positive, both for the general idea and for PEP 703 specifically. The Steering Council is also largely positive on both. We intend to accept PEP 703, although we're still working on the acceptance details...

Our base assumptions are:

- Long-term (probably 5+ years), the no-GIL build should be the only build. We do not want to create a permanent split between with-GIL and no-GIL builds (and extension modules).

- We want to be very careful with backward compatibility. We do not want another Python 3 situation, so any changes in third-party code needed to accommodate no-GIL builds should just work in with-GIL builds (although backward compatibility with older Python versions will still need to be addressed). This is not Python 4. We are still considering the requirements we want to place on ABI compatibility and other details for the two builds and the effect on backward compatibility.

- Before we commit to switching entirely to the no-GIL build, we need to see community support for it. We can't just flip the default and expect the community to figure out what work they need to do to support it. We, the core devs, need to gain experience with the new build mode and all it entails. We will probably need to figure out new C APIs and Python APIs as we sort out thread safety in existing code. We also need to bring along the rest of the Python community as we gain those insights and make sure the changes we want to make, and the changes we want them to make, are palatable.

- We want to be able to change our mind if it turns out, any time before we make no-GIL the default, that it's just going to be too disruptive for too little gain. Such a decision could mean rolling back all of the work, so until we're certain we want to make no-GIL the default, code specific to no-GIL should be somewhat identifiable.

The current plan is to "add the no-GIL build as an experimental build mode, presumably in 3.13... [A]fter we have confidence that there is enough community support to make production use of no-GIL viable, we make the no-GIL build supported but not the default (yet), and set a target date/Python version for making it the default... We expect this to take at least a year or two, possibly more."

"Long-term, we want no-GIL to be the default, and to remove any vestiges of the GIL (without unnecessarily breaking backward compatibility)... We think it may take as much as five years to get to this stage."
AI

Ask Slashdot: What Happens After Every Programmer is Using AI? (infoworld.com) 127

There have been several articles on how programmers can adapt to writing code with AI. But presumably AI companies will then gather more data from how real-world programmers use their tools.

So long-time Slashdot reader ThePub2000 has a question. "Where's the generative leaps if the humans using it as an assistant don't make leaps forward in a public space?" Let's posit a couple of things:

- First, your AI responses are good enough to use.
- Second, because they're good enough to use you no longer need to post publicly about programming questions.

Where does AI go after it's "perfected itself"?

Or, must we live in a dystopian world where code is scrapable for free, regardless of license, but access to support in an AI from that code comes at a price?

Red Hat Software

RHEL Response Discussed by SFC Conference's Panel - Including a New Enterprise Linux Standard (sfconservancy.org) 66

Last weekend in Portland, Oregon, the Software Freedom Conservancy hosted a new conference called the Free and Open Source Software Yearly.

And long-time free software activist Bradley M. Kuhn (currently a policy fellow/hacker-in-residence for the Software Freedom Conservancy) hosted a lively panel discussion on "the recent change" to public source code releases for Red Hat Enterprise Linux which shed light on what may happen next. The panel also included:
  • benny Vasquez, the Chair of the AlmaLinux OS Foundation
  • Jeremy Allison, Samba co-founder and software engineer at CIQ (focused on Rocky Linux). Allison is also Slashdot reader #8,157, Jeremy Allison - Sam.
  • James (Jim) Wright, Oracle's chief architect for Open Source policy/strategy/compliance/alliances

"Red Hat themselves did not reply to our repeated requests to join us on this panel... SUSE was also invited but let us know they were unable to send someone on short notice to Portland for the panel."

One interesting audience question for the panel came from Karsten Wade, a one-time Red Hat senior community architect who left Red Hat in April after 21 years, but said he was "responsible for bringing the CentOS team onboard to Red Hat." Wade argued that CentOS "was always doing a clean rebuild from source RPMS of their own..." So "isn't all of this thunder doing Red Hat's job for them, of trying to get everyone to say, 'This thing is not the equivalent to RHEL.'"

In response, Jeremy Allison made a good point. "None of us here are the arbiters of whether it's good enough of a rebuild of Red Hat Linux. The customers are the arbiters." But this led to an audience member asking a very forward-looking question: what are the chances the community could adopt a new (and open) enterprise Linux standard that distributions could follow. AlmaLinux's Vasquez replied, "Chances are real high... I think everyone sees that as the obvious answer. I think that's the obvious next step. I'll leave it at that." And Oracle's Wright added "to the extent that the market asks us to standardize? We're all responsive."

When asked if they'd consider adding features not found in RHEL ("such as high-security gates through reproducible builds") AlmaLinux's Vasquez said "100% -- yeah. One of the things that we're kind of excited about is the opportunities that this opens for us. We had decided we were just going to focus on this north star of 1:1 Red Hat no matter what -- and with that limitation being removed, we have all kinds of options." And CIQ's Allison said "We're working on FIPS certification for an earlier version of Rocky, that Red Hat, I don't believe, FIPS certified. And we're planning to release that."

AlmaLinux's Vasquez emphasized later that "We're just going to build Enterprise Linux. Red Hat has done a great job of establishing a fantastic target for all of us, but they don't own the rights to enterprise Linux. We can make this happen, without forcing an uncomfortable conversation with Red Hat. We can get around this."

And Allison later applied a "Star Wars" quote to Red Hat's predicament. "The more things you try and grab, the more things slip through your fingers." That is, "The more somebody tries to exert control over a codebase, the more the pushback will occur from people who collaborate in that codebase." AlmaLinux's Vasquez also said they're already "in conversations" with independent software vendors about the "flow of support" into non-Red Hat distributions -- though that's always been the case. "Finding ways to reduce the barrier for those independent software vendors to add official support for us is, like, maybe more cumbersome now, but it's the same problem that we've had..."

Early in the discussion Oracle's Jim Wright pointed out that even Red Hat's own web site defines open source code as "designed to be publicly accessible — anyone can see, modify, and distribute the code as they see fit." ("Until now," Wright added pointedly...) There was some mild teasing of Oracle during the 50-minute discussion -- someone asked at one point if they'd re-license their proprietary implementation of ZFS under the GPL. But at the end of the panel, Oracle's Jim Wright still reminded the audience that "If you want to work on open source Linux, we are hiring."

Read Slashdot's transcript of highlights from the discussion.


Programming

Is C++ Gaining in Popularity? (i-programmer.info) 106

An anonymous reader shares this report from Dice.com: C++ is enjoying a surge in popularity, according to the latest update to the TIOBE Index, which tracks programming languages' "buzz."

C++ currently sits right behind C and Python on TIOBE's list. "A few months ago, the C++ programming language claimed position 3 of the TIOBE index (at the expense of Java). But C++ has not finished its rise. C seems to be its next victim," added the note accompanying the data... ["At the moment, the gap between the two is only 0.76%."]

MATLAB, Scratch and Rust also match their all-time high records at positions #10, #12 and #17 respectively.

So here, according to TIOBE, are the 10 most popular programming languages:

1. Python
2. C
3. C++
4. Java
5. C#
6. JavaScript
7. Visual Basic
8. SQL
9. PHP
10. MATLAB

The site I Programmer digs deeper: C++ was the only one of the top four languages to see a positive year-on-year change in its percentage rating — adding 0.79% to stand at 10.8%. Python had the smallest loss of the entire Top 20, -0.01%, leaving it with a share of 13.42%, while Visual Basic had the greatest loss at -2.07%. This, combined with JavaScript gaining 1.34%, led to JavaScript overtaking it to occupy #6, its highest ever ranking in the TIOBE Index.
They also note that COBOL "had a 3-month rise going from a share of 0.41% in April to 0.86% in July which moved it into #20 on the index."
