Programming

'Running Clang in the Browser Using WebAssembly' (wasmer.io) 56

This week the (MIT-licensed) WebAssembly runtime Wasmer announced "a major milestone in making any software run with WebAssembly."

The announcement's headline? Running Clang in the browser using WebAssembly... Thanks to the newest release of Wasmer (4.4) and the Wasmer JS SDK (0.8.0) you can now run [compiler front-end] clang anywhere Wasmer runs! This allows compiling C programs from virtually anywhere. Including JavaScript and your preferred browser! (we tested Chrome, Safari and Firefox and everything is working like a charm)...

- You can compile C code to WebAssembly easily just using the Wasmer CLI: no toolchains or complex installations needed, install Wasmer and you are ready to go...!

- You can compile C projects directly from JavaScript...!

- We expect online IDEs to start adopting the SDK to allow their users to compile and run C programs in the browser....

Do you want to use clang in your JavaScript project? Thanks to our newly released Wasmer JS SDK you can do it easily, in both the browser and Node.js/Bun etc... Wasmer's clang can even optimize the file for you automatically using wasm-opt under the hood (Clang automatically detects if wasm-opt is used, and it will be automatically called when optimizing the file). Imagine using Emscripten without needing its toolchain installed — or even better, imagine running Emscripten in the browser.
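
For readers wondering what that looks like in practice, here is a minimal TypeScript sketch of compiling a C file with Wasmer's clang package via the Wasmer JS SDK. It is not code from the announcement: the package name ("clang/clang"), the Directory mount, and the entrypoint.run()/wait() calls follow the SDK's documented patterns as best recalled, so treat the exact names and options as assumptions and check the SDK docs before relying on them.

```typescript
// Hypothetical sketch: drive Wasmer's clang from JavaScript/TypeScript.
// Package name, Directory API, and run() options are assumptions based on
// the Wasmer JS SDK's published examples.
import { init, Wasmer, Directory } from "@wasmer/sdk";

async function compileHello(): Promise<Uint8Array> {
  await init(); // load the SDK's own WebAssembly internals first

  // Pull the clang package from the Wasmer registry (cached after first fetch).
  const clang = await Wasmer.fromRegistry("clang/clang");

  // Put the C source into an in-memory directory that clang can see.
  const project = new Directory();
  await project.writeFile(
    "hello.c",
    '#include <stdio.h>\nint main(void) { printf("Hello from Wasm!\\n"); return 0; }\n',
  );

  // Run clang entirely in the browser or Node/Bun, emitting a .wasm binary.
  const instance = await clang.entrypoint!.run({
    args: ["/project/hello.c", "-o", "/project/hello.wasm"],
    mount: { "/project": project },
  });
  const result = await instance.wait();
  if (!result.ok) throw new Error(`clang failed: ${result.stderr}`);

  // The compiled module is just bytes we can instantiate, cache, or download.
  return project.readFile("hello.wasm");
}

compileHello().then((wasm) => console.log(`compiled ${wasm.length} bytes of Wasm`));
```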

The announcement looks to a future of compiling native Python libraries, when "any project depending on LLVM can now be easily compiled to WebAssembly..."

"This is the beginning of an awesome journey, we can't wait to see what you create next with this."
AI

80% of Software Engineers Must Upskill For AI Era By 2027, Gartner Warns (itpro.com) 108

80% of software engineers will need to upskill by 2027 to keep pace with generative AI's growing demands, according to Gartner. The consultancy predicts AI will transform the industry in three phases. Initially, AI tools will boost productivity, particularly for senior developers. Subsequently, "AI-native software engineering" will emerge, with most code generated by AI. Long-term, AI engineering will rise as enterprise adoption increases, requiring a new breed of professionals skilled in software engineering, data science, and machine learning.
Python

The Treasurer of Python NZ Pleads Guilty To Stealing From the Society (interest.co.nz) 20

Long-time Slashdot reader Bismillah writes: Python New Zealand has gone through some rough times lately, with its then-treasurer stealing money from the society. Things were looking really serious for a while, with Python NZ looking at being liquidated due to the theft of funds.

However, there is a silver lining to the story, as the free and open source movement rallied behind Python NZ and got them out of a serious pickle.

"Our friends at Linux Australia and at the Python Software Foundation went well above and beyond to support us, and save us," says Tom Eastman president of Python New Zealand, in an article from interest.co.nz.

He also says he hopes the treasurer is ordered by the court to pay restitution. (In the article the treasurer confirms that he's pleaded guilty to the theft, which took place between February 2019 and October 2023 — leaving Python NZ owing conference suppliers around $55,000.) "We had $26 in the bank accounts," Eastman tells the site.

The group now has new transparency and accountability measures...
AI

OpenAI Opens Its Speech AI Engine To Developers 7

At its DevDay event today, OpenAI announced that it is giving third-party developers access to its speech-to-speech engine that powers ChatGPT's advanced voice mode. "The move paves the way for a wave of AI apps that offer conversational voice interfaces," reports Axios. From the report: Early testers of the feature include nutrition and fitness app Healthify and Speak, a language learning app. Other new features being made available to developers include the ability to fine tune models based on pictures. In a demo for reporters, OpenAI executives showed an example of the new audio capabilities combined with Twilio's API to allow an AI assistant to call a fictional candy shop and place an order for 400 chocolate covered strawberries.

Developers will only be able to use the voices provided by OpenAI -- the same ones that are options within ChatGPT. While the voice won't be watermarked in any way and developers won't have to make the AI system identify itself, OpenAI says it's against the company's terms of service to use its systems to spam or mislead people.
Firefox

uBlock Origin Lite Maker Ends Firefox Store Support, Slams Mozilla For Hostile Reviews (neowin.net) 50

The Firefox extension for the uBlock Origin Lite content blocker is no longer available. According to Neowin, "Raymond Hill, the maker of the extension, pulled support and moved uBlock Origin Lite to self-hosting after multiple encounters with a 'nonsensical and hostile' review process from the store review team." From the report: It all started in early September when Mozilla flagged every version of the uBlock Origin Lite extension as violating its policies. Reviewers then claimed the extension apparently collected user data and contained "minified, concatenated or otherwise machine-generated code." The developer seemingly debunked those allegations, saying that "it takes only a few seconds for anyone who has even basic understanding of JavaScript to see the raised issues make no sense." Raymond Hill decided to drop the extension from the store and move it to a self-hosted version. This means that those who want to continue using uBlock Origin Lite on Firefox should download the latest version from GitHub (it can auto-update itself).

The last message from the developer in a now-closed GitHub issue shows an email from Mozilla admitting its fault and apologizing for the mistake. However, Raymond still pulled the extension from the Mozilla Add-ons Store, which means you can no longer find it on addons.mozilla.org. It is worth noting that the original uBlock Origin for Firefox is still available and supported.

Programming

Are AI Coding Assistants Really Saving Developers Time? (cio.com) 142

Uplevel provides insights from coding and collaboration data, according to a recent report from CIO magazine, and the company measured "the time to merge code into a repository [and] the number of pull requests merged" for about 800 developers over a three-month period (comparing the statistics to the previous three months).

Their study "found no significant improvements for developers" using Microsoft's AI-powered coding assistant tool Copilot, according to the article (shared by Slashdot reader snydeq): Use of GitHub Copilot also introduced 41% more bugs, according to the study...

In addition to measuring productivity, the Uplevel study looked at factors in developer burnout, and it found that GitHub Copilot hasn't helped there, either. The amount of working time spent outside of standard hours decreased for both the control group and the test group using the coding tool, but it decreased more when the developers weren't using Copilot.

An Uplevel product manager/data analyst acknowledged to the magazine that there may be other ways to measure developer productivity — but they still consider their metrics solid. "We heard that people are ending up being more reviewers for this code than in the past... You just have to keep a close eye on what is being generated; does it do the thing that you're expecting it to do?"

The article also quotes the CEO of software development firm Gehtsoft, who says they didn't see major productivity gains from LLM-based coding assistants — but did see them introducing errors into code. With different prompts generating different code sections, "It becomes increasingly more challenging to understand and debug the AI-generated code, and troubleshooting becomes so resource-intensive that it is easier to rewrite the code from scratch than fix it."

On the other hand, cloud services provider Innovative Solutions saw significant productivity gains from coding assistants like Claude Dev and GitHub Copilot. And Slashdot reader destined2fail1990 says that while large/complex code bases may not see big gains, "I have seen a notable increase in productivity from using Cursor, the AI powered IDE." Yes, you have to review all the code that it generates, why wouldn't you? But often times it just works. It removes the tedious tasks like querying databases, writing model code, writing forms and processing forms, and a lot more. Some forms can have hundreds of fields and processing those fields along with doing checks for valid input is time consuming, but can be automated effectively using AI.
This prompted an interesting discussion on the original story submission. Slashdot reader bleedingobvious responded: Cursor/Claude are great BUT the code produced is almost never great quality. Even given these tools, the junior/intern teams still cannot outpace the senior devs. Great for learning, maybe, but the productivity angle not quite there.... yet.

It's damned close, though. Give it 3-6 months.

And Slashdot reader abEeyore posted: I suspect that the results are quite a bit more nuanced than that. I expect that it is, even outside of the mentioned code review, a shift in where and how the time is spent, and not necessarily in how much time is spent.
Agree? Disagree? Share your own experiences in the comments.

And are developers really saving time with AI coding assistants?
AI

Can AI Developers Be Held Liable for Negligence? (lawfaremedia.org) 123

Bryan Choi, an associate professor of law and computer science focusing on software safety, proposes shifting AI liability onto the builders of the systems: To date, most popular approaches to AI safety and accountability have focused on the technological characteristics and risks of AI systems, while averting attention from the workers behind the curtain responsible for designing, implementing, testing, and maintaining such systems...

I have previously argued that a negligence-based approach is needed because it directs legal scrutiny on the actual persons responsible for creating and managing AI systems. A step in that direction is found in California's AI safety bill, which specifies that AI developers shall articulate and implement protocols that embody the "developer's duty to take reasonable care to avoid producing a covered model or covered model derivative that poses an unreasonable risk of causing or materially enabling a critical harm" (emphasis added). Although tech leaders have opposed California's bill, courts don't need to wait for legislation to allow negligence claims against AI developers. But how would negligence work in the AI context, and what downstream effects should AI developers anticipate?

The article suggests two possibilities. Classifying AI developers as ordinary employees leaves employers then sharing liability for negligent acts (giving them "strong incentives to obtain liability insurance policies and to defend their employees against legal claims.") But AI developers could also be treated as practicing professionals (like physicians and attorneys). "In this regime, each AI professional would likely need to obtain their own individual or group malpractice insurance policies." AI is a field that perhaps uniquely seeks to obscure its human elements in order to magnify its technical wizardry. The virtue of the negligence-based approach is that it centers legal scrutiny back on the conduct of the people who build and hype the technology. To be sure, negligence is limited in key ways and should not be viewed as a complete answer to AI governance. But fault should be the default and the starting point from which all conversations about AI accountability and AI safety begin.
Thanks to long-time Slashdot reader david.emery for sharing the article.
Businesses

Oracle Owns Nearly a Third of Arm Chip House Ampere, Could Take Control In 2027 (theregister.com) 6

The Register's Tobias Mann reports: Oracle could choose to take control of Ampere Computing, the Arm processor designer it has backed and uses in its cloud. A proxy statement [PDF] filed on Wednesday reveals that Oracle held a 29 percent stake in Ampere as of May 31, 2024, and has the option to gain majority control over the chip house in 2027. "The total carrying value of our investments in Ampere, after accounting for losses under the equity method of accounting, was $1.5 billion as of May 31, 2024," the filing reads. Oracle also revealed it extended $600 million in loans in the form of convertible debt to Ampere during its 2024 fiscal year, on top of $400 million in debt given during the prior fiscal year. Ampere's debts are set to mature beginning June 2026, when Oracle will have the option of converting those investments into additional equity in the chip startup. "If either of such options is exercised by us or our co-investors, we would obtain control of Ampere and consolidate its results with our results of operations," the filing explains.

According to the document, Oracle spent roughly $48 million on Ampere processors during its 2023 fiscal year -- some of it direct with Ampere and some through a third party. By comparison, Big Red spent just $3 million on Ampere's chips and had $101.1 million worth of products available under a pre-payment order by the end of fiscal year 2024. This is despite the fact that Oracle is aggressively expanding its datacenter footprint to address growing demand for AI infrastructure. These efforts have included the deployment of massive clusters of GPUs from Nvidia and AMD with the largest campus developments nearing a gigawatt in scale. The filing also revealed that Ampere founder and CEO Renee James will not seek re-election to Oracle's board of directors.

Programming

'Compile and Run C in JavaScript', Promises Bun (thenewstack.io) 54

The JavaScript runtime Bun is a Node.js/Deno alternative (that's also a bundler/test runner/package manager).

And Bun 1.1.28 now includes experimental support for "compiling and running native C from JavaScript," according to this report from The New Stack: "From compression to cryptography to networking to the web browser you're reading this on, the world runs on C," wrote Jarred Sumner, creator of Bun. "If it's not written in C, it speaks the C ABI (C++, Rust, Zig, etc.) and is available as a C library. C and the C ABI are the past, present, and future of systems programming." This is a low-boilerplate way to use C libraries and system libraries from JavaScript, he said, adding that this feature allows the same project that runs JavaScript to also run C without a separate build step... "It's good for glue code that binds C or C-like libraries to JavaScript. Sometimes, you want to use a C library or system API from JavaScript, and that library was never meant to be used from JavaScript," Sumner added.

It's currently possible to achieve this by compiling to WebAssembly or writing an N-API (napi) addon or V8 C++ API library addon, the team explained. But both are suboptimal... WebAssembly can do this but its isolated memory model comes with serious tradeoffs, the team wrote, including an inability to make system calls and a requirement to clone everything. "Modern processors support about 280 TB of addressable memory (48 bits). WebAssembly is 32-bit and can only access its own memory," Sumner wrote. "That means by default, passing strings and binary data between JavaScript and WebAssembly must clone every time. For many projects, this negates any performance gain from leveraging WebAssembly."

The latest version of Bun, released Friday, builds on this by adding N-API (napi) support to cc [Bun's C compiler, which uses TinyCC to compile the C code]. "This makes it easier to return JavaScript strings, objects, arrays and other non-primitive values from C code," wrote Sumner. "You can continue to use types like int, float, double to send & receive primitive values from C code, but now you can also use N-API types! Also, this works when using dlopen to load shared libraries with bun:ffi (such as Rust or C++ libraries with C ABI exports)....
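
To make the workflow concrete, here is a minimal sketch of calling C from Bun through cc. The cc import from bun:ffi and its source/symbols options follow Bun's published examples for this release, but the file name, function, and exact option spellings here are illustrative assumptions rather than verbatim documentation.

```typescript
// Hypothetical sketch of Bun's experimental C compilation via bun:ffi.
// Assumes a file named add.c sits next to this script and contains:
//
//   int add(int a, int b) { return a + b; }
//
import { cc } from "bun:ffi";

const {
  symbols: { add },
} = cc({
  source: "./add.c", // compiled at runtime by TinyCC; no separate build step
  symbols: {
    // Primitive FFI types, as described in the quote above.
    add: { returns: "int", args: ["int", "int"] },
  },
});

console.log(add(2, 3)); // -> 5, computed in C
```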

"TinyCC compiles to decently performant C, but it won't do advanced optimizations that Clang or GCC does like autovectorization or very specialized CPU instructions," Sumner wrote. "You probably won't get much of a performance gain from micro-optimizing small parts of your codebase through C, but happy to be proven wrong!"

Security

CISA Boss: Makers of Insecure Software Are the Real Cyber Villains (theregister.com) 120

Software developers who ship buggy, insecure code are the true baddies in the cyber crime story, Jen Easterly, boss of the US government's Cybersecurity and Infrastructure Security Agency, has argued. From a report: "The truth is: Technology vendors are the characters who are building problems" into their products, which then "open the doors for villains to attack their victims," declared Easterly during a Wednesday keynote address at Mandiant's mWise conference. Easterly also implored the audience to stop "glamorizing" crime gangs with fancy poetic names. How about "Scrawny Nuisance" or "Evil Ferret," Easterly suggested.

Even calling security holes "software vulnerabilities" is too lenient, she added. This phrase "really diffuses responsibility. We should call them 'product defects,'" Easterly said. And instead of automatically blaming victims for failing to patch their products quickly enough, "why don't we ask: Why does software require so many urgent patches? The truth is: We need to demand more of technology vendors."

Python

Microsoft Releases and Patents 'Python In Excel' 67

Longtime Slashdot reader theodp writes: "Python in Excel is now generally available for Windows users of Microsoft 365 Business and Enterprise," Microsoft announced in a Monday blog post. "Last August, in partnership with Anaconda, we introduced an exciting new addition to Excel by integrating Python, making it possible to seamlessly combine Python and Excel analytics within the same workbook, no setup required. Since then, we've brought the power of popular Python analytics libraries such as pandas, Matplotlib, and NLTK to countless Excel users." Microsoft also announced the public preview of Copilot in Excel with Python, which will take users' natural language requests for analysis and automatically generate, explain, and insert Python code into Excel spreadsheets.

While drawing criticism for limiting Python execution to locked-down Azure cloud containers, Python in Excel has also earned accolades from the likes of Python creator Guido van Rossum, now a Microsoft Distinguished Engineer, as well as Pandas creator Wes McKinney.

Left unmentioned in Monday's announcement is that Microsoft managed to convince the USPTO to issue it a patent in July 2024 on the Enhanced Integration of Spreadsheets With External Environments (alt. source), which Microsoft explains covers the "implementation of enhanced integrations of native spreadsheet environments with external resources such as - but not limited to - Python." All of which may come as a surprise to software vendors and individuals who were integrating Excel and external programming environments years before Microsoft filed its patent application in September 2022.
AI

Ellison Declares Oracle 'All In' On AI Mass Surveillance 114

Oracle cofounder Larry Ellison envisions AI as the backbone of a new era of mass surveillance, positioning Oracle as a key player in AI infrastructure through its unique networking architecture and partnerships with AWS and Microsoft. The Register reports: Ellison made the comments near the end of an hour-long chat at the Oracle financial analyst meeting last week during a question and answer session in which he painted Oracle as the AI infrastructure player to beat in light of its recent deals with AWS and Microsoft. Many companies, Ellison touted, build AI models at Oracle because of its "unique networking architecture," which dates back to the database era.

"AI is hot, and databases are not," he said, making Oracle's part of the puzzle less sexy, but no less important, at least according to the man himself - AI systems have to have well-organized data, or else they won't be that valuable. The fact that some of the biggest names in cloud computing (and Elon Musk's Grok) have turned to Oracle to run their AI infrastructure means it's clear that Oracle is doing something right, claimed now-CTO Ellison. "If Elon and Satya [Nadella] want to pick us, that's a good sign - we have tech that's valuable and differentiated," Ellison said, adding: One of the ideal uses of that differentiated offering? Maximizing AI's pubic security capabilities.

"The police will be on their best behavior because we're constantly watching and recording everything that's going on," Ellison told analysts. He described police body cameras that were constantly on, with no ability for officers to disable the feed to Oracle. Even requesting privacy for a bathroom break or a meal only meant sections of recording would require a subpoena to view - not that the video feed was ever stopped. AI would be trained to monitor officer feeds for anything untoward, which Ellison said could prevent abuse of police power and save lives. [...] "Citizens will be on their best behavior because we're constantly recording and reporting," Ellison added, though it's not clear what he sees as the source of those recordings - police body cams or publicly placed security cameras. "There are so many opportunities to exploit AI," he said.
Python

Fake Python Coding Tests Installed Malicious Software Packages From North Korea (scmagazine.com) 22

"New malicious software packages tied to the North Korean Lazarus Group were observed posing as a Python coding skills test for developers seeking a new job at Capital One, but were tracked to GitHub projects with embedded malware," reports SC magazine: Researchers at ReversingLabs explained in a September 10 blog post that the scheme was a follow-on to the VMConnect campaign that they first identified in August 2023 in which developers were lured into downloading malicious code via fake job interviews.
More details from The Hacker News: These packages, for their part, have been published directly on public repositories like npm and PyPI, or hosted on GitHub repositories under their control. ReversingLabs said it identified malicious code embedded within modified versions of legitimate PyPI libraries such as pyperclip and pyrebase... It's implemented in the form of a Base64-encoded string that obscures a downloader function, which establishes contact with a command-and-control server in order to execute commands received as a response.

In one instance of the coding assignment identified by the software supply chain firm, the threat actors sought to create a false sense of urgency by requiring job seekers to build a Python project shared in the form of a ZIP file within five minutes and find and fix a coding flaw in the next 15 minutes. This makes it "more likely that he or she would execute the package without performing any type of security or even source code review first," Zanki said, adding "that ensures the malicious actors behind this campaign that the embedded malware would be executed on the developer's system."

Tom's Hardware reports that "The capacity for exploitation at that point is pretty much unlimited, due to the flexibility of Python and how it interacts with the underlying OS. This is a good time to refer to PEP 668 which enforces virtual environments for non-system wide Python installs."

More from The Hacker News: Some of the aforementioned tests claimed to be a technical interview for financial institutions like Capital One and Rookery Capital Limited, underscoring how the threat actors are impersonating legitimate companies in the sector to pull off the operation. It's currently not clear how widespread these campaigns are, although prospective targets are scouted and contacted using LinkedIn, as recently also highlighted by Google-owned Mandiant.
Programming

The Rust Foundation is Reviewing and Improving Rust's Security (i-programmer.info) 22

The Rust Foundation is making "considerable progress" on a complete security audit of the Rust ecosystem, according to the coding news site I Programmer, citing a newly released report from the nonprofit Rust Foundation: The foundation is investigating the development of a Public Key Infrastructure (PKI) model for the Rust language, including the design and implementation of a PKI CA and a resilient Quorum model for the project to implement, and the report says that language updates suggested by members of the Project were nearly ready for implementation.

Following the XZ backdoor vulnerability, the Security Initiative has focused on supply chain security, including work on provenance-tracking, verifying that a given crate is actually associated with the repository it claims to be. The top 5,000 crates by download count have been checked and verified.

Threat modeling has now been completed on the Crates ecosystem, Rust Infrastructure, crates.io, and the Rust Project.

Two open source security tools, Painter and Typomania, have been developed and released. Painter can be used to build a graph database of dependencies and invocations between all crates within the crates.io ecosystem, including the ability to obtain 'unsafe' statistics, better call graph pruning, and FFI boundary mapping. Typomania ports typogard to Rust, and can be used to detect potential typosquatting as a reusable library that can be adapted to any registry.

They've also tightened admin privileges for Rust's package registry, according to the article. And "In addition to the work on the Security Initiative, the Foundation has also been working on improving interoperability between Rust and C++, supported by a $1 million contribution from Google."

According to the Rust Foundation's technology director, they've made "impressive technical strides and developed new strategies to reinforce the safety, security, and longevity of the Rust programming language." And the director says the new report "paints a clear picture of the impact of our technical projects like the Security Initiative, Safety-Critical Rust Consortium, infrastructure and crates.io support, Interop Initiative, and much more."
Programming

JavaScript, Python, Java: Redmonk's Programming Language Ranking Sees Lack of Change (redmonk.com) 30

Redmonk's latest programming language ranking (attempting to gauge "potential future adoption trends") has found evidence of "a landscape resistant to change." Outside of CSS moving down a spot and C++ moving up one, the Top 10 was unchanged. And even in the back half of the rankings, where languages tend to be less entrenched and movement is more common, only three languages moved at all... There are a few signs of languages following in TypeScript's footsteps and working their way up the path, both in the Top 20 and at the back end of the Top 100 as we'll discuss shortly, but they're the exception that proves the rule.

It's possible that we'll see more fluid usage of languages, and increased usage of code assistants would theoretically make that much more likely, but at this point it's a fairly static status quo. With that, some results of note:

- TypeScript (#6): technically TypeScript didn't move, as it was ranked sixth in our last run, but this is the first quarter in which it has been the sole occupant of that spot. CSS, in this case, dropped one place to seven leaving TypeScript just outside the Top 5. It will be interesting to see whether or not it has more momentum to expend or whether it's topped out for the time being.

- Kotlin (#14) / Scala (#14): both of these JVM-based languages jumped up a couple of spots — two spots in Scala's case and three for Kotlin. Scala's rise is notable because it had been on something of a downward trajectory from a one time high of 12th, and Kotlin's placement is a mild surprise because it had spent three consecutive runs not budging from 17, only to make the jump now. The tie here, meanwhile, is interesting because Scala's long history gives it an accretive advantage over Kotlin's more recent development, but in any case the combination is evidence of the continued staying power of the JVM.

- Objective C (#17): speaking of downward trajectories and the 17th placement on this list, Objective C's slide that began in mid-2018 continued and left the language with its lowest placement in these rankings to date at #17. That's still an enormously impressive achievement, of course, and there are dozens of languages that would trade their usage for Objective C's, but the direction of travel seems clear.

- Dart (#19) / Rust (#19): while once grouped with Kotlin as up and coming languages driven by differing incentives and trends, Dart and Rust have not been able to match the ascent of their counterpart with five straight quarters of no movement. That's not necessarily a negative; as with Objective C, these are still highly popular languages and communities, but it's worth questioning whether new momentum will arrive and from where, particularly because the communities are experiencing some friction in growing their usage.

It's important to remember Redmonk's methodology. "We extract language rankings from GitHub and Stack Overflow, and combine them for a ranking that attempts to reflect both code (GitHub) and discussion (Stack Overflow) traction. The idea is not to offer a statistically valid representation of current usage, but rather to correlate language discussion and usage in an effort to extract insights into potential future adoption trends."
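
Redmonk publishes rankings rather than code, but the combination step it describes can be illustrated with a toy sketch: rank languages separately by "code" traction and "discussion" traction, then order them by the sum of the two ranks. This is purely illustrative; the inputs below are made up and this is not Redmonk's actual algorithm or data.

```typescript
// Toy illustration only: combine two independent language rankings by summed rank.
// The score objects are fabricated stand-ins for GitHub and Stack Overflow traction.
type Scores = Record<string, number>;

// Convert raw scores into rank positions (1 = most traction).
function toRanks(scores: Scores): Map<string, number> {
  const ordered = Object.entries(scores).sort(([, a], [, b]) => b - a);
  return new Map(ordered.map(([lang], i) => [lang, i + 1]));
}

// Order languages by the sum of their two ranks (lower is better).
function combined(code: Scores, discussion: Scores): string[] {
  const c = toRanks(code);
  const d = toRanks(discussion);
  const langs = [...new Set([...Object.keys(code), ...Object.keys(discussion)])];
  const worst = langs.length + 1; // penalty for being absent from one source
  return langs.sort(
    (a, b) =>
      (c.get(a) ?? worst) + (d.get(a) ?? worst) -
      ((c.get(b) ?? worst) + (d.get(b) ?? worst)),
  );
}

// Fabricated example inputs:
console.log(
  combined(
    { JavaScript: 95, Python: 93, Java: 80 }, // "code" traction (GitHub-like)
    { JavaScript: 92, Python: 90, Java: 75 }, // "discussion" traction (Stack Overflow-like)
  ),
); // -> [ "JavaScript", "Python", "Java" ]
```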

Having said that, here's the current top ten in Redmonk's ranking:
  1. JavaScript
  2. Python
  3. Java
  4. PHP
  5. C#
  6. TypeScript
  7. CSS
  8. C++
  9. Ruby
  10. C

Their announcement also notes that at the other end of the list, the programming language Bicep "jumped eight spots to #78 and Zig 10 to #87. That progress pales next to Ballerina, however, which jumped from #80 to #61 this quarter. The general purpose language from WSO2, thus, is added to the list of potential up and comers we're keeping an eye on."


Android

Google Tests Desktop Windowing For Android Tablets (theverge.com) 30

Google is testing a "desktop windowing" feature for Android tablets that "will let you resize apps freely and arrange them on your screen at will," reports The Verge. It's currently available as a developer preview. From the report: Currently, apps on Android tablets open in full-screen by default. When the new mode is enabled, each app will appear in a window with controls that allow you to reposition, maximize, or close the app. You'll also see a taskbar at the bottom of your screen with your running apps. [...] Once the feature is rolled out to everyone, you can turn it on by pressing and holding the window handle at the top of an app's screen. If you have a keyboard attached, you can also use the shortcut meta key (Windows, Command, or Search) + Ctrl + Down to activate desktop mode. (You can exit the mode by closing all your active apps or by dragging a window and dragging it to the top of your screen.)

Google notes that apps locked to portrait orientation are still resizable, which might make things look a bit weird if certain apps aren't optimized. However, Google plans to address this in a future update by scaling the UI of non-resizable apps while maintaining their aspect ratio.

Android

Android Apps Can Now Block Sideloading, Force Downloads Through Google Play (androidauthority.com) 56

Android Authority's Mishaal Rahman reports: There are many reasons why you may want to sideload apps on your Android phone, but there are also good reasons why developers would want to block sideloading. A sideloaded app won't contribute to the developer's Play Store metrics, for one, but it also prevents the developer from curating which devices can use their app. Improperly sideloaded apps can also crash due to missing assets or code, or they might be missing certain features because you installed the wrong version for your device. Whatever the reason may be, developers who want to stop you from sideloading their apps now have an easier way to do so thanks to the Play Integrity API.

The Google Play Integrity API is an interface that helps developers "check that interactions and server requests are coming from [their] genuine app binary running on a genuine Android device." It looks for evidence that the app has been tampered with, that the app is running in an "untrustworthy" software environment, that the device has Google Play Protect enabled, and more. If you've heard of or dealt with SafetyNet Attestation before on a rooted phone, then you're probably already familiar with Play Integrity, even if not by that name. Play Integrity is the successor to SafetyNet Attestation, only it comes with even more features for developers.

As is the case with SafetyNet Attestation, developers call the Play Integrity API at any point in their app, receive what's called an integrity verdict, and then decide what they want to do from there. Some apps call the Play Integrity API when they launch and block access entirely depending on what the verdict is, while others only call the API when you're about to perform a sensitive action, so they can warn you that you shouldn't proceed. The Play Integrity API makes it easy for apps to offload the determination of whether the device and its software environment are "genuine," and with the latest update to the API, apps can now easily determine whether the person who installed them is "genuine" as well.
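
To make that flow concrete, here is a hypothetical TypeScript sketch of the server-side half: the Android app requests an integrity token, sends it to the developer's backend, Google decodes it into a verdict, and the backend maps the verdict onto a decision. The verdict field names and enum values (appRecognitionVerdict, deviceRecognitionVerdict, appLicensingVerdict and their string constants) are taken from the Play Integrity documentation as best recalled and should be treated as assumptions; the Android-side call would live in Kotlin or Java and isn't shown.

```typescript
// Hypothetical sketch: interpreting an already-decoded Play Integrity verdict
// on a backend. Field names and enum values are assumptions to be checked
// against Google's current Play Integrity documentation.
interface IntegrityVerdict {
  appIntegrity?: { appRecognitionVerdict?: string };         // was this exact binary delivered by Play?
  deviceIntegrity?: { deviceRecognitionVerdict?: string[] }; // e.g. ["MEETS_DEVICE_INTEGRITY"]
  accountDetails?: { appLicensingVerdict?: string };         // e.g. "LICENSED" for a Play-store install
}

type Decision = "allow" | "warn" | "block";

function decide(verdict: IntegrityVerdict): Decision {
  const app = verdict.appIntegrity?.appRecognitionVerdict;
  const device = verdict.deviceIntegrity?.deviceRecognitionVerdict ?? [];
  const license = verdict.accountDetails?.appLicensingVerdict;

  // Unmodified binary, installed from Google Play, on a device passing integrity checks.
  if (app === "PLAY_RECOGNIZED" && license === "LICENSED" && device.includes("MEETS_DEVICE_INTEGRITY")) {
    return "allow";
  }
  // Recognized app but not installed through Play (e.g. sideloaded): the developer chooses.
  if (app === "UNRECOGNIZED_VERSION" || license === "UNLICENSED") {
    return "warn";
  }
  // Tampered binary, emulator, or no verdict available.
  return "block";
}

// Toy usage with a fabricated verdict object:
console.log(decide({
  appIntegrity: { appRecognitionVerdict: "PLAY_RECOGNIZED" },
  deviceIntegrity: { deviceRecognitionVerdict: ["MEETS_DEVICE_INTEGRITY"] },
  accountDetails: { appLicensingVerdict: "LICENSED" },
})); // -> "allow"
```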
"As Google continues to bolster Play Integrity's detection mechanisms and add new features, it's going to become harder and harder for power users to justify rooting Android," concludes Rahman. "At the same time, regular users will be better protected from potentially risky and fraudulent interactions, so it's clear that Play Integrity will continue to be adopted by more and more apps."
Oracle

Oracle Is Designing a Data Center That Would Be Powered By Three Small Nuclear Reactors 96

With electricity demand from AI becoming so "crazy," Oracle's Larry Ellison announced the company is designing a data center that will be powered by three small nuclear reactors capable of providing more than a gigawatt of electricity. "The location and the power place we've located, they've already got building permits for three nuclear reactors," Ellison said. "These are the small modular nuclear reactors to power the data center. This is how crazy it's getting. This is what's going on." CNBC reports: Small modular nuclear reactors are new designs that promise to speed the deployment of reliable, carbon-free energy as power demand rises from data centers, manufacturing and the broader electrification of the economy. Generally, these reactors are 300 megawatts or less, about a third the size of the typical reactor in the current U.S. fleet. They would be prefabricated in several pieces and then assembled on the site, reducing the capital costs that stymie larger plants.

Right now, small modular reactors are a technology of the future, with executives in the nuclear industry generally agreeing that they won't be commercialized in the U.S. until the 2030s. There are currently three operational small modular reactors in the world, according to the Nuclear Energy Agency. Two are in China and Russia, the central geopolitical adversaries of the U.S. A test reactor is also operational in Japan.
Oracle

'Oracle's Missteps in Cloud Computing Are Paying Dividends in AI' (msn.com) 26

Oracle missed the tech industry's move to cloud computing last decade and ended up an also-ran. Now the AI boom has given it another shot. WSJ: The 47-year-old company that made its name on relational database software has emerged as an attractive cloud-computing provider for AI developers such as OpenAI, sending its long-stagnant stock to new heights. Oracle shares are up 34% since January, well outpacing the Nasdaq's 14% rise and those of bigger competitors Microsoft, Amazon.com and Google.

It is a surprising revitalization for a company many in the tech industry had dismissed as a dinosaur of a bygone, precloud era. Oracle appears to be successfully making a case to investors that it has become a strong fourth-place player in a cloud market surging thanks to AI. Its lateness to the game may have played to its advantage, as a number of its 162 data centers were built in recent years and are designed for the development of AI models, known as training.

In addition, Oracle isn't developing its own large AI models that compete with potential clients. The company is considered such a neutral and unthreatening player that it now has partnerships with Microsoft, Google and Amazon, all of which let Oracle's databases run in their clouds. Microsoft is also running its Bing AI chatbot on Oracle's servers.

Programming

Two Android Engineers Explain How They Extended Rust In Android's Firmware (theregister.com) 62

The Register reports that Google "recently rewrote the firmware for protected virtual machines in its Android Virtualization Framework using the Rust programming language." And they add that Google "wants you to do the same, assuming you deal with firmware."

A post on Google's security blog by Android engineers Ivan Lozano and Dominik Maier promises to show "how to gradually introduce Rust into your existing firmware," adding "You'll see how easy it is to boost security with drop-in Rust replacements, and we'll even demonstrate how the Rust toolchain can handle specialized bare-metal targets."

This prompts the Register to quip that easy "is not a term commonly heard with regard to a programming language known for its steep learning curve." Citing the lack of high-level security mechanisms in firmware, which is often written in memory-unsafe languages such as C or C++, Lozano and Maier argue that Rust provides a way to avoid the memory safety bugs like buffer overflows and use-after-free that account for the majority of significant vulnerabilities in large codebases. "Rust provides a memory-safe alternative to C and C++ with comparable performance and code size," they note. "Additionally it supports interoperability with C with no overhead."
At one point the blog post explains that "You can replace existing C functionality by writing a thin Rust shim that translates between an existing Rust API and the C API the codebase expects." But their ultimate motivation is greater security. "Android's use of safe-by-design principles drives our adoption of memory-safe languages like Rust, making exploitation of the OS increasingly difficult with every release."

And the Register also got this quote from Lars Bergstrom, Google's director of engineering for Android Programming Languages (and chair of the Rust Foundation's board of directors). "At Google, we're increasing Rust's use across Android, Chromium, and more to reduce memory safety vulnerabilities. We're dedicated to collaborating with the Rust ecosystem to drive its adoption and provide developers with the resources and training they need to succeed.

"This work on bringing Rust to embedded and firmware addresses another critical part of the stack."
