Communications

The US Government Makes a $42 Million Bet On Open Cell Networks (theverge.com) 26

An anonymous reader quotes a report from The Verge: The US government has committed $42 million to further the development of the 5G Open RAN (O-RAN) standard that would allow wireless providers to mix and match cellular hardware and software, opening up a bigger market for third-party equipment that's cheaper and interoperable. The National Telecommunications and Information Administration (NTIA) grant would establish a Dallas O-RAN testing center to prove the standard's viability as a way to head off Huawei's steady cruise toward a global cellular network hardware monopoly.

Verizon global network and technology president Joe Russo promoted the funding as a way to achieve "faster innovation in an open environment." To achieve the standard's goals, AT&T vice president of RAN technology Robert Soni says that AT&T and Verizon have formed the Acceleration of Compatibility and Commercialization for Open RAN Deployments Consortium (ACCoRD), which includes a grab bag of wireless technology companies like Ericsson, Nokia, Samsung, Dell, Intel, Broadcom, and Rakuten. Japanese wireless carrier Rakuten launched the first O-RAN network in 2020. The company's then-CEO, Tareq Amin, told The Verge's Nilay Patel in 2022 that Open RAN would enable low-cost network build-outs using smaller equipment rather than massive towers -- which has long been part of the promise of 5G.

But O-RAN is about more than that; establishing interoperability means companies like Verizon and AT&T wouldn't be forced to buy all of their hardware from a single company to create a functional network. For the rest of us, that means faster build-outs and "more agile networks," according to Rakuten. In the US, Dish has been working on its own O-RAN network under the name Project Genesis. The 5G network was creaky and unreliable when former Verge staffer Mitchell Clark tried it out in Las Vegas in 2022, but the company said in June last year that it had met its goal of covering 70 percent of the US population. Dish has struggled to become the next big cell provider in the US, though -- leading satellite communications company EchoStar, which spun off from Dish in 2008, to purchase the company in January.
The Washington Post writes that O-RAN "is Washington's anointed champion to try to unseat the Chinese tech giant Huawei Technologies" as the world's biggest supplier of cellular infrastructure gear.

According to the Post, Biden has emphasized the importance of O-RAN in conversations with international leaders over the past few years. Additionally, it notes that Congress along with the NTIA have dedicated approximately $2 billion to support the development of this standard.
Microsoft

Microsoft Working On Its Own DLSS-like Upscaler for Windows 11 (theverge.com) 42

Microsoft appears to be readying its own DLSS-like AI upscaling feature for PC games. From a report: X user PhantomOcean3 discovered the feature inside the latest test versions of Windows 11 over the weekend, with Microsoft describing its automatic super resolution as a way to "use AI to make supported games play more smoothly with enhanced details." That sounds a lot like Nvidia's Deep Learning Super Sampling (DLSS) technology, which uses AI to upscale games and improve frame rates and image quality. AMD and Intel also offer their own variants, with FSR and XeSS both growing in popularity in recent PC game releases.
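Conceptually, DLSS-class upscalers render each frame at a reduced internal resolution, then reconstruct a sharper high-resolution frame with a trained network fed by extra data such as motion vectors. Microsoft hasn't documented how its feature works internally, so here is only a minimal conceptual sketch in Python/PyTorch, with the neural step stubbed out by a plain bilinear upscale:

```python
import torch
import torch.nn.functional as F

# Render at a reduced internal resolution, e.g. 960x540 for a 1080p output.
low_res_frame = torch.rand(1, 3, 540, 960)  # batch, RGB, height, width

# Stand-in for the learned reconstruction step: a DLSS-style pipeline would
# instead feed the low-res frame, motion vectors, and frame history into a
# trained network to recover detail.
high_res_frame = F.interpolate(
    low_res_frame, scale_factor=2, mode="bilinear", align_corners=False
)

print(high_res_frame.shape)  # torch.Size([1, 3, 1080, 1920])
```

The payoff is that the GPU only shades a quarter of the output pixels per frame; the upscaler fills in the rest.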
AI

AI PCs To Account for Nearly 60% of All PC Shipments by 2027, IDC Says (idc.com) 70

IDC, in a press release: A new forecast from IDC shows shipments of artificial intelligence (AI) PCs -- personal computers with specific system-on-a-chip (SoC) capabilities designed to run generative AI tasks locally -- growing from nearly 50 million units in 2024 to more than 167 million in 2027. By the end of the forecast, IDC expects AI PCs will represent nearly 60% of all PC shipments worldwide. [...] Until recently, running an AI task locally on a PC was done on the central processing unit (CPU), the graphics processing unit (GPU), or a combination of the two. However, this can have a negative impact on the PC's performance and battery life because these chips are not optimized to run AI efficiently. PC silicon vendors have now introduced AI-specific silicon to their SoCs called neural processing units (NPUs) that run these tasks more efficiently.
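As a back-of-envelope check, the quoted figures imply both a growth rate and a total-market size (a sketch using only the numbers above; IDC's exact definitions may differ):

```python
ai_pcs_2024 = 50e6    # "nearly 50 million units in 2024"
ai_pcs_2027 = 167e6   # "more than 167 million in 2027"
share_2027 = 0.60     # "nearly 60% of all PC shipments"

# Implied compound annual growth rate over the three-year span.
cagr = (ai_pcs_2027 / ai_pcs_2024) ** (1 / 3) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~49.5% per year

# Implied total PC market in 2027 if AI PCs are ~60% of it.
print(f"Implied 2027 PC shipments: {ai_pcs_2027 / share_2027 / 1e6:.0f}M")  # ~278M
```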

To date, IDC has identified three types of NPU-enabled AI PCs:
1. Hardware-enabled AI PCs include an NPU that offers less than 40 tera operations per second (TOPS) performance and typically enables specific AI features within apps to run locally. Qualcomm, Apple, AMD, and Intel are all shipping chips in this category today.

2. Next-generation AI PCs include an NPU with 40 to 60 TOPS performance and an AI-first operating system (OS) that enables persistent and pervasive AI capabilities in the OS and apps. Qualcomm, AMD, and Intel have all announced future chips for this category, with delivery expected to begin in 2024. Microsoft is expected to roll out major updates (and updated system specifications) to Windows 11 to take advantage of these high-TOPS NPUs.

3. Advanced AI PCs are PCs that offer more than 60 TOPS of NPU performance. While no silicon vendors have announced such products, IDC expects them to appear in the coming years. This IDC forecast does not include advanced AI PCs, but they will be incorporated into future updates.
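Restated in code, IDC's three tiers reduce to simple TOPS thresholds (a sketch; how IDC treats the exact 40- and 60-TOPS boundaries is my reading of the definitions above):

```python
def idc_ai_pc_category(npu_tops: float) -> str:
    """Map NPU throughput in TOPS to IDC's AI PC tiers as described above."""
    if npu_tops > 60:
        return "Advanced AI PC (>60 TOPS; no silicon announced yet)"
    if npu_tops >= 40:
        return "Next-generation AI PC (40-60 TOPS, AI-first OS)"
    if npu_tops > 0:
        return "Hardware-enabled AI PC (<40 TOPS)"
    return "Not an NPU-enabled AI PC"

print(idc_ai_pc_category(45))  # Next-generation AI PC (40-60 TOPS, AI-first OS)
```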
Michael Dell, commenting on X: This is correct and might be underestimating it. AI PCs are coming fast and Dell is ready.
United States

US Says Leading AI Companies Join Safety Consortium To Address Risks (reuters.com) 6

The Biden administration on Thursday said leading AI companies are among more than 200 entities joining a new U.S. consortium to support the safe development and deployment of generative AI. From a report: Commerce Secretary Gina Raimondo announced the U.S. AI Safety Institute Consortium (AISIC), which includes OpenAI, Alphabet's Google, Anthropic and Microsoft along with Facebook-parent Meta Platforms, Apple, Amazon, Nvidia, Palantir, Intel, JPMorgan Chase and Bank of America. "The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence," Raimondo said in a statement.
AI

Intel Delays $20 Billion Ohio Project, Citing Slow Chip Market (reuters.com) 41

An anonymous reader quotes a report from Reuters: Intel is delaying the construction timeline for its $20 billion chipmaking project in Ohio amid market challenges and the slow rollout of U.S. grant money, the Wall Street Journal reported on Thursday. Its initial timeline had chip-making starting next year. Construction on the manufacturing facilities now is not expected to be finished until late 2026, the report said, citing people involved in the project. Shares of the chipmaker were last down 1.5% in extended trading.

"We are fully committed to completing the project, and construction is continuing. We have made a lot of progress in the last year," an Intel spokesperson said, adding that managing large-scale projects often involves changing timelines. Uncertain demand for its chips used in the traditional server and personal computer markets had led the company to forecast revenue for the first quarter below market estimates late last month. This came as a shift in spending to AI data servers, dominated by rivals Nvidia and aspiring AI competitor Advanced Micro Devices sapped demand for traditional server chips -- Intel's core data center offering.

Networking

Ceph: a Journey To 1 TiB/s (ceph.io) 16

It's "a free and open-source, software-defined storage platform," according to Wikipedia, providing object storage, block storage, and file storage "built on a common distributed cluster foundation". The charter advisory board for Ceph included people from Canonical, CERN, Cisco, Fujitsu, Intel, Red Hat, SanDisk, and SUSE.

And Nite_Hawk (Slashdot reader #1,304) is one of its core engineers — a former Red Hat principal software engineer named Mark Nelson. (He's now leading R&D for a small cloud systems company called Clyso that provides Ceph consulting.) And he's returned to Slashdot to share a blog post describing "a journey to 1 TiB/s". This gnarly tale from production starts with Clyso assisting "a fairly hip and cutting edge company that wanted to transition their HDD-backed Ceph cluster to a 10 petabyte NVMe deployment" using object-based storage devices [or OSDs]. As Nelson writes: "I can't believe they figured it out first. That was the thought going through my head back in mid-December after several weeks of 12-hour days debugging why this cluster was slow... Half-forgotten superstitions from the 90s about appeasing SCSI gods flitted through my consciousness..."

Ultimately they decided to go with a Dell architecture we designed, which came in roughly 13% cheaper than the original configuration despite having several key advantages. The new configuration has less memory per OSD (still a comfortable 12GiB each), but faster memory throughput. It also provides more aggregate CPU resources, significantly more aggregate network throughput, a simpler single-socket configuration, and utilizes the newest generation of AMD processors and DDR5 RAM. By employing smaller nodes, we halved the impact of a node failure on cluster recovery....

The initial single-OSD test looked fantastic for large reads and writes and showed nearly the same throughput we saw when running FIO tests directly against the drives. As soon as we ran the 8-OSD test, however, we observed a performance drop. Subsequent single-OSD tests continued to perform poorly until several hours later when they recovered. So long as a multi-OSD test was not introduced, performance remained high. Confusingly, we were unable to invoke the same behavior when running FIO tests directly against the drives. Just as confusing, we saw that during the 8 OSD test, a single OSD would use significantly more CPU than the others. A wallclock profile of the OSD under load showed significant time spent in io_submit, which is what we typically see when the kernel starts blocking because a drive's queue becomes full...
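For readers who want to reproduce that kind of direct-drive baseline, a 4K random-write run might look like the following sketch (fio's standard options; the device path is a placeholder, and writing to a raw NVMe device destroys its contents):

```python
import subprocess

# Direct 4K random writes against a raw NVMe namespace, similar in spirit to
# the FIO baselines described above. WARNING: destructive to the target device.
cmd = [
    "fio",
    "--name=nvme-randwrite",
    "--filename=/dev/nvme0n1",  # placeholder device path
    "--ioengine=libaio",        # the io_submit-based engine profiled above
    "--direct=1",               # bypass the page cache
    "--rw=randwrite",
    "--bs=4k",
    "--iodepth=128",            # deep queue depth; io_submit blocks when the
    "--numjobs=4",              # device queue fills, as described above
    "--runtime=60",
    "--time_based",
    "--group_reporting",
]
subprocess.run(cmd, check=True)
```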

For over a week, we looked at everything from BIOS settings, NVMe multipath, low-level NVMe debugging, changing kernel/Ubuntu versions, and checking every single kernel, OS, and Ceph setting we could think of. None of these things fully resolved the issue. We even performed blktrace and iowatcher analysis during "good" and "bad" single OSD tests, and could directly observe the slow IO completion behavior. At this point, we started getting the hardware vendors involved. Ultimately it turned out to be unnecessary. There were one minor and two major fixes that got things back on track.

It's a long blog post, but here's where it ends up:
  • Fix One: "Ceph is incredibly sensitive to latency introduced by CPU c-state transitions. A quick check of the bios on these nodes showed that they weren't running in maximum performance mode which disables c-states."
  • Fix Two: [A very clever engineer working for the customer] "ran a perf profile during a bad run and made a very astute discovery: A huge amount of time is spent in the kernel contending on a spin lock while updating the IOMMU mappings. He disabled IOMMU in the kernel and immediately saw a huge increase in performance during the 8-node tests." In a comment below, Nelson adds that "We've never seen the IOMMU issue before with Ceph... I'm hoping we can work with the vendors to understand better what's going on and get it fixed without having to completely disable IOMMU."
  • Fix Three: "We were not, in fact, building RocksDB with the correct compile flags... It turns out that Canonical fixed this for their own builds as did Gentoo after seeing the note I wrote in do_cmake.sh over 6 years ago... With the issue understood, we built custom 17.2.7 packages with a fix in place. Compaction time dropped by around 3X and 4K random write performance doubled."
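The first two fixes are easy to check for on any Linux node; here's a small sketch reading the standard sysfs paths (layouts vary by kernel and platform, so treat this as a starting point, not a diagnosis):

```python
from pathlib import Path

# Fix One: deep C-states add wakeup latency. List the C-states the kernel
# exposes on CPU 0 and whether each is currently disabled.
for state in sorted(Path("/sys/devices/system/cpu/cpu0/cpuidle").glob("state*")):
    name = (state / "name").read_text().strip()
    disabled = (state / "disable").read_text().strip() == "1"
    print(f"{state.name}: {name} (disabled: {disabled})")

# Fix Two: a populated /sys/kernel/iommu_groups means the IOMMU is active;
# the kernel command line shows whether it was explicitly turned off.
groups_dir = Path("/sys/kernel/iommu_groups")
n_groups = len(list(groups_dir.iterdir())) if groups_dir.exists() else 0
print(f"IOMMU groups present: {n_groups}")
print("Kernel cmdline:", Path("/proc/cmdline").read_text().strip())
```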

The story has a happy ending, with performance testing eventually showing data being read at 635 GiB/s — and a colleague daring them to attempt 1 TiB/s. They built a new testing configuration targeting 63 nodes — achieving 950 GiB/s — then tried some more performance optimizations...


Security

A Flaw In Millions of Apple, AMD, and Qualcomm GPUs Could Expose AI Data (wired.com) 22

An anonymous reader quotes a report from Wired: As more companies ramp up development of artificial intelligence systems, they are increasingly turning to graphics processing unit (GPU) chips for the computing power they need to run large language models (LLMs) and to crunch data quickly at massive scale. Between video game processing and AI, demand for GPUs has never been higher, and chipmakers are rushing to bolster supply. In new findings released today, though, researchers are highlighting a vulnerability in multiple brands and models of mainstream GPUs -- including Apple, Qualcomm, and AMD chips -- that could allow an attacker to steal large quantities of data from a GPU's memory. The silicon industry has spent years refining the security of central processing units, or CPUs, so they don't leak data in memory even when they are built to optimize for speed. However, since GPUs were designed for raw graphics processing power, they haven't been architected to the same degree with data privacy as a priority. As generative AI and other machine learning applications expand the uses of these chips, though, researchers from New York-based security firm Trail of Bits say that vulnerabilities in GPUs are an increasingly urgent concern. "There is a broader security concern about these GPUs not being as secure as they should be and leaking a significant amount of data," Heidy Khlaaf, Trail of Bits' engineering director for AI and machine learning assurance, tells WIRED. "We're looking at anywhere from 5 megabytes to 180 megabytes. In the CPU world, even a bit is too much to reveal."

To exploit the vulnerability, which the researchers call LeftoverLocals, attackers would need to already have established some amount of operating system access on a target's device. Modern computers and servers are specifically designed to silo data so multiple users can share the same processing resources without being able to access each others' data. But a LeftoverLocals attack breaks down these walls. Exploiting the vulnerability would allow a hacker to exfiltrate data they shouldn't be able to access from the local memory of vulnerable GPUs, exposing whatever data happens to be there for the taking, which could include queries and responses generated by LLMs as well as the weights driving the response. In their proof of concept, as seen in the GIF below, the researchers demonstrate an attack where a target -- shown on the left -- asks the open source LLM Llama.cpp to provide details about WIRED magazine. Within seconds, the attacker's device -- shown on the right -- collects the majority of the response provided by the LLM by carrying out a LeftoverLocals attack on vulnerable GPU memory. The attack program the researchers created uses less than 10 lines of code. [...] Though exploiting the vulnerability would require some amount of existing access to targets' devices, the potential implications are significant given that it is common for highly motivated attackers to carry out hacks by chaining multiple vulnerabilities together. Furthermore, establishing "initial access" to a device is already necessary for many common types of digital attacks.
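Conceptually, the "listener" half of such an attack is just a compute kernel that reads GPU local memory without ever writing it; on vulnerable drivers, whatever the previous kernel on that compute unit left behind is still readable. Below is a conceptual sketch via pyopencl -- this is not the researchers' actual proof of concept, and patched stacks should hand back zeroed memory:

```python
import numpy as np
import pyopencl as cl

WG = 256  # workgroup size; must not exceed the device maximum

KERNEL = """
__kernel void listener(__global uint *out) {
    // Never written here: on vulnerable GPUs this still holds values left
    // behind by whatever workgroup last ran on this compute unit.
    __local uint leftovers[256];
    out[get_global_id(0)] = leftovers[get_local_id(0)];
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
prog = cl.Program(ctx, KERNEL).build()

out = np.zeros(WG, dtype=np.uint32)
out_buf = cl.Buffer(ctx, cl.mem_flags.WRITE_ONLY, out.nbytes)
prog.listener(queue, (WG,), (WG,), out_buf)  # launch one workgroup
cl.enqueue_copy(queue, out, out_buf)
print("nonzero words recovered:", np.count_nonzero(out))
```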
The researchers did not find evidence that Nvidia, Intel, or Arm GPUs contain the LeftoverLocals vulnerability, but Apple, Qualcomm, and AMD all confirmed to WIRED that they are impacted. Here's what each of the affected companies had to say about the vulnerability, as reported by Wired:

Apple: An Apple spokesperson acknowledged LeftoverLocals and noted that the company shipped fixes with its latest M3 and A17 processors, which it unveiled at the end of 2023. This means that the vulnerability is seemingly still present in millions of existing iPhones, iPads, and MacBooks that depend on previous generations of Apple silicon. On January 10, the Trail of Bits researchers retested the vulnerability on a number of Apple devices. They found that Apple's M2 MacBook Air was still vulnerable, but the iPad Air 3rd generation A12 appeared to have been patched.
Qualcomm: A Qualcomm spokesperson told WIRED that the company is "in the process" of providing security updates to its customers, adding, "We encourage end users to apply security updates as they become available from their device makers." The Trail of Bits researchers say Qualcomm confirmed it has released firmware patches for the vulnerability.
AMD: AMD released a security advisory on Wednesday detailing its plans to offer fixes for LeftoverLocals. The protections will be "optional mitigations" released in March.
Google: For its part, Google says in a statement that it "is aware of this vulnerability impacting AMD, Apple, and Qualcomm GPUs. Google has released fixes for ChromeOS devices with impacted AMD and Qualcomm GPUs."
AI

CES PC Makers Bet on AI To Rekindle Sales (reuters.com) 15

PC and microchip companies struggling to get consumers to replace pandemic-era laptops offered a new feature to crowds this week at CES: AI. From a report: PC and chipmakers including AMD and Intel are betting that the so-called "neural processing units" now found in the latest chip designs will encourage consumers to once again pay for higher-end laptops. Adding additional AI capabilities could help take market share from Apple. "The conversations I'm having with customers are about 'how do I get my PC ready for what I think is coming in AI and going to be able to deliver,'" said Sam Burd, Dell Technologies' president of its PC business. Chipmakers built the NPU blocks because they can achieve a high level of performance for AI functions with relatively modest power needs. Today there are few applications that might take full advantage of the new capabilities, but more are coming, said David McAfee, corporate vice president and general manager of the client channel business at AMD.

Among the few applications that can take advantage of such chips is the creative suite of software produced by Adobe. Intel hosted an "open house" where a handful of PC vendors showed off their latest laptops with demos designed to put the new capabilities on display. Machines from the likes of Dell and Lenovo were arrayed inside one of the cavernous ballrooms at the Venetian Convention Center on Las Vegas Boulevard.

IT

Asus' New Laptop Has Two Screens and a Removable Keyboard (theverge.com) 19

Asus is back with another Zenbook Duo, the latest $2,161 device in its range of dual-screened laptops. But rather than including a small secondary display above this laptop's keyboard like previous Duos, the revamped version for 2024 has two equally sized 14-inch screens. The Verge has more: They're both OLED, with resolutions of up to 2880 x 1800, aspect ratios of 16:10, and a maximum refresh rate of 120Hz. Between them, they offer a total of 19.8 inches of usable screen real estate. It's a similar approach to the one Lenovo took with last year's dual-screen Yoga Book 9i, albeit with a couple of tweaks. Like Lenovo, Asus gives you a choice of typing on the lower touchscreen via a virtual keyboard or by using a detachable physical Bluetooth keyboard. But what's different here is that Asus' keyboard has a trackpad built in, so you don't have to use it in combination with an on-screen trackpad.

Asus envisages you using the new Zenbook Duo in a few different configurations. There's a standard laptop mode, where the bottom screen is entirely covered by a traditional keyboard and trackpad. Or you can rest the keyboard on your desk and have the two screens arranged vertically for "Dual Screen" mode or horizontally for "Desktop" mode. Finally, there's "Sharing" mode, which has you ditch the keyboard entirely and lay the laptop down on a flat surface with both its screens facing up and away from each other, presumably so you can share your work with a colleague sitting across the desk from you. Naturally, having launched a year later than its competitor, the Asus Zenbook Duo is also packed with more modern hardware. It can be specced with up to an Intel Core Ultra 9 185H processor and 32GB of RAM, up to 2TB of storage, and a 75Wh battery. Connectivity includes two Thunderbolt 4 ports, a USB-A port, HDMI out, and a 3.5mm jack, and the laptop can be used with Asus' stylus.

Wireless Networking

Wi-Fi 7 is Ready To Go Mainstream (androidcentral.com) 28

The Wi-Fi Alliance is now starting to certify devices that use the latest generation of wireless connectivity, and the goal is to make sure these devices work with each other seamlessly. Android Central: Basically, the certification allows router brands and device manufacturers to guarantee that their products will work with other Wi-Fi 7 devices. Qualcomm, for its part, is announcing that it has several designs that leverage Wi-Fi 7, and that it achieved the Wi-Fi Alliance certification -- dubbed Wi-Fi Certified 7 -- for the FastConnect 7800 module that's baked into the Snapdragon 8 Gen 3 and 8 Gen 2, and the Networking Pro portfolio.

Wi-Fi Certified 7 is designed to enable interoperability, and ensure that devices from various brands work without any issues. In addition to Qualcomm, the likes of MediaTek, Intel, Broadcom, CommScope, and MaxLinear are also picking up certifications for their latest networking products. I chatted with Andy Davidson, Sr. Director of Technology Planning at Qualcomm, ahead of the announcement to understand a little more about how Wi-Fi 7 is different. Wi-Fi 7 uses the 6GHz band -- similar to Wi-Fi 6E -- but introduces 320MHz channels that have the potential to deliver significantly greater bandwidth. Wi-Fi 7 also uses a clever new feature called Multi-Link Operation (MLO) that lets devices connect to two bands at the same time, leading to better signal strength and bandwidth.
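To first order, the two features compound: doubling channel width roughly doubles the per-link PHY rate, and MLO adds a second link on top. A simplified sketch with illustrative numbers (real rates depend on modulation, spatial streams, and protocol overhead):

```python
base_rate_160mhz = 2.4  # Gbps, illustrative per-link rate on a 160MHz channel

# 320MHz channels: roughly double the per-link rate.
rate_320mhz = base_rate_160mhz * 2

# Multi-Link Operation: aggregate a second band (e.g. 5GHz) at the same time.
second_link = 2.4  # Gbps, illustrative
aggregate = rate_320mhz + second_link
print(f"~{aggregate:.1f} Gbps aggregate vs ~{base_rate_160mhz:.1f} Gbps before")
```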
Further reading: Wi-Fi 7 Signals the Industry's New Priority: Stability.
Hardware

Oldest-Known Version of MS-DOS's Predecessor Discovered (arstechnica.com) 70

An anonymous reader quotes a report from Ars Technica: Microsoft's MS-DOS (and its IBM-branded counterpart, PC DOS) eventually became software juggernauts, powering the vast majority of PCs throughout the '80s and serving as the underpinnings of Windows throughout the '90s. But the software had humble beginnings, as we've detailed in our history of the IBM PC and elsewhere. It began in mid-1980 as QDOS, or "Quick and Dirty Operating System," the work of developer Tim Paterson at a company called Seattle Computer Products (SCP). It was later renamed 86-DOS, after the Intel 8086 processor, and this was the version that Microsoft licensed and eventually purchased.

Last week, Internet Archive user f15sim discovered and uploaded a new-old version of 86-DOS to the Internet Archive. Version 0.1-C of 86-DOS is available for download from the Internet Archive and can be run using the SIMH emulator; before this, the earliest extant version of 86-DOS was version 0.34, also uploaded by f15sim. This version of 86-DOS is rudimentary even by the standards of early-'80s-era DOS builds and includes just a handful of utilities, a text-based chess game, and documentation for said chess game. But as early as it is, it remains essentially recognizable as the DOS that would go on to take over the entire PC business. If you're just interested in screenshots, some have been posted by user NTDEV on the site that used to be Twitter.

According to the version history available on Wikipedia, this build of 86-DOS would date back to roughly August of 1980, shortly after it lost the "QDOS" moniker. By late 1980, SCP was sharing version 0.3x of the software with Microsoft, and by early 1981, it was being developed as the primary operating system of the then-secret IBM Personal Computer. By the middle of 1981, roughly a year after 86-DOS began life as QDOS, Microsoft had purchased the software outright and renamed it MS-DOS. Microsoft and IBM continued to co-develop MS-DOS for many years; the version IBM licensed and sold on its PCs was called PC DOS, though for most of their history the two products were identical. Microsoft also retained the ability to license the software to other computer manufacturers as MS-DOS, which contributed to the rise of a market of mostly interoperable PC clones. The PC market as we know it today still more or less resembles the PC-compatible market of the mid-to-late 1980s, albeit with dramatically faster and more capable components.

Desktops (Apple)

Inside Apple's Massive Push To Transform the Mac Into a Gaming Paradise (inverse.com) 144

Apple is reinvesting in gaming with advanced Mac hardware, improvements to Apple silicon, and gaming-focused software, aiming not to repeat its past mistakes and capture a larger share of the gaming market. In an article for Inverse, Raymond Wong provides an in-depth overview of this endeavor, including commentary from Apple's marketing managers Gordon Keppel, Leland Martin, and Doug Brooks. Here's an excerpt from the report: Gaming on the Mac in the 1990s until 2020, when Apple made a big shift to its own custom silicon, could be boiled down to this: Apple was in a hardware arms race with the PC that it couldn't win. Mac gamers were hopeful that the switch from PowerPC to Intel CPUs starting in 2005 would turn things around, but it didn't because by then, GPUs started becoming the more important hardware component for running 3D games, and the Mac's support for third-party GPUs could only be described as lackluster. Fast forward to 2023, and Apple has a renewed interest in gaming on the Mac, the likes of which it hasn't shown in the last 25 years. "Apple silicon has changed all that," Keppel tells Inverse. "Now, every Mac that ships with Apple silicon can play AAA games pretty fantastically. Apple silicon has been transformative of our mainstream systems that got tremendous boosts in graphics with M1, M2, and now with M3."

Ask any gadget reviewer (including myself) and they will tell you Keppel isn't just drinking the Kool-Aid because Apple pays him to. Macs with Apple silicon really are performant computers that can play some of the latest PC and console games. In three generations of desktop-class chip design, Apple has created a platform with "tens of millions of Apple silicon Macs," according to Keppel. That's tens of millions of Macs with monstrous CPU and GPU capabilities for running graphics-intensive games. Apple's upgrades to the GPUs on its silicon are especially impressive. The latest Apple silicon, the M3 family of chips, supports hardware-accelerated ray-tracing and mesh shading, features that only a few years ago didn't seem like they would ever be a priority, let alone ones that are built into the entire spectrum of MacBook Pros.

The "magic" of Apple silicon isn't just performance, says Leland Martin, an Apple software marketing manager. Whereas Apple's fallout with game developers on the Mac previously came down to not supporting specific computer hardware, Martin says Apple silicon started fresh with a unified hardware platform that not only makes it easier for developers to create Mac games for, but will allow for those games to run on other Apple devices. "If you look at the Mac lineup just a few years ago, there was a mix of both integrated and discrete GPUs," Martin says. "That can add complexity when you're developing games. Because you have multiple different hardware permutations to consider. Today, we've effectively eliminated that completely with Apple silicon, creating a unified gaming platform now across iPhone, iPad, and Mac. Once a game is designed for one platform, it's a straightforward process to bring it to the other two. We're seeing this play out with games like Resident Evil Village that launched first [on Mac] followed by iPhone and iPad."

"Gaming was fundamentally part of the Apple silicon design,â Doug Brooks, also on the Mac product marketing team, tells Inverse. "Before a chip even exists, gaming is fundamentally incorporated during those early planning stages and then throughout development. I think, big picture, when we design our chips, we really look at building balanced systems that provide great CPU, GPU, and memory performance. Of course, [games] need powerful GPUs, but they need all of those features, and our chips are designed to deliver on that goal. If you look at the chips that go in the latest consoles, they look a lot like that with integrated CPU, GPU, and memory." [...] "One thing we're excited about with this most recent launch of the M3 family of chips is that we're able to bring these powerful new technologies, Dynamic Caching, as well as ray-tracing and mesh shading across our entire line of chips," Brook adds. "We didn't start at the high end and trickle them down over time. We really wanted to bring that to as many customers as possible."

Microsoft

Microsoft Readies 'Next-Gen' AI-Focused PCs (windowscentral.com) 23

Microsoft is working on significant updates to its Surface Pro and Surface Laptop lines. According to Windows Central, new devices "will be announced in the spring and will be marketed as Microsoft's first true next-gen AI PCs." From the report: For the first time, both Surface Pro and Surface Laptop will be available in Intel and Arm flavors, and both will have next-gen NPU (neural processing unit) silicon. Sources are particularly excited about the Arm variants, which I understand will be powered by a custom version of Qualcomm's new Snapdragon X Series chips. Internally, Microsoft is calling next-generation Arm devices powered by Qualcomm's new chips "CADMUS" PCs. These PCs are purpose-built for the next version of Windows, codenamed Hudson Valley, and will utilize many of the upcoming next-gen AI experiences Microsoft is building into the 2024 release of Windows. Specifically, Microsoft touts CADMUS PCs as being genuinely competitive with Apple Silicon, sporting similar battery life, performance, and security. The next Surface Pro and Surface Laptop are expected to be some of the first CADMUS PCs to ship next year in preparation for the Hudson Valley release coming later in 2024.

So, what's changing with the Surface Laptop 6? I'm told this new Surface Laptop will finally have an updated design with thinner bezels, rounded display corners, and more ports. This will be the first time that Microsoft's Surface Laptop line is getting a design refresh, which is well overdue. The Surface Laptop 6 will again be available in two sizes. However, I'm told the smaller model will have a slightly larger 13.8-inch display, up from 13.5 inches on the Surface Laptop 5. Sources say the larger model remains at 15-inches. I'm told Surface Laptop 6 will also have an expanded selection of ports, including two USB-C ports and one USB-A port, along with the magnetic Surface Connect charging port. Microsoft is also adding a haptic touchpad (likely with Sensel technology) and a dedicated Copilot button on the keyboard deck for quick access to Windows Copilot.

The next Surface Pro is also shaping up to be a big update, although not as drastic as the Surface Laptop 6. According to my sources, the most significant changes coming to Surface Pro 10 are mostly related to its display, which sources say is now brighter with support for HDR content, has a new anti-reflective coating to reduce glare, and now also sports rounded display corners. I've also heard that Microsoft is testing a version of Surface Pro 10 with a slightly lower-resolution 2160 x 1440 display, down from the 2880 x 1920 screen found on previous Surface Pro models. Sources say this lower-resolution panel is only being considered for lower-tier models, meaning the more expensive models will continue to ship with the higher-resolution display. Lastly, I also hear Microsoft is equipping the next Surface Pro with an NFC reader for commercial customers and a wider FoV webcam, which will be enhanced with Windows Studio Effects. It should also be available in new colors. I've also heard we may get an updated Type Cover accessory with a dedicated Copilot button for quick access to Windows Copilot.

Intel

12VO Power Standard Appears To Be Gaining Steam, Will Reduce PC Cables and Costs (tomshardware.com) 79

An anonymous reader quotes a report from Tom's Hardware: The 12VO power standard (PDF), developed by Intel, is designed to reduce the number of power cables needed to power a modern PC, ultimately reducing cost. While industry uptake of the standard has been slow, a slew of new products from MSI indicates that 12VO is gaining traction.

MSI is gearing up with two 12VO-compliant motherboards, covering both Intel and AMD platforms, and a 12VO power supply that it's releasing simultaneously: The Pro B650 12VO WiFi, Pro H610M 12VO, and MSI 12VO PSU power supply are all 'coming soon,' which presumably means they'll officially launch at CES 2024. Hardwareluxx got a pretty good look at MSI's offerings during its EHA (European Hardware Awards) tech tour, including the 'Project Zero' we covered earlier. One of the noticeable changes is the absence of a 24-pin ATX connector; ATX12VO boards use a 10-pin connector instead. The publication also saw a 12VO-compliant FSP power supply in a compact system with a thick graphics card.

A couple of years ago, we reported on FSP's 650-watt and 750-watt SFX 12VO power supplies. Apart from that, there is one 6-pin ATX12VO connector, termed an 'extra board connector' in the manual, and one 8-pin 12V power connector for the CPU. There are two smaller 4-pin connectors that provide the 5V power needed for SATA drives; it is likely each of these connectors powers two SATA-based drives. Intel proposed the ATX12VO standard several years ago, but adoption has been slow until now. The standard delivers 12V exclusively, completely removing the direct 3.3V and 5V supplies. The success of the new standard will depend on the wide availability of compatible motherboards and power supplies.

Intel

Intel To Invest $25 Billion in Israel After Winning Incentives (bloomberg.com) 150

Intel confirmed it will invest a total of $25 billion in Israel after securing $3.2 billion in incentives from the country's government. From a report: The outlay, announced by the Israeli government in June and unconfirmed by Intel until now, will go toward an expansion of the company's wafer fabrication site in Kiryat Gat, south of Tel Aviv. The incentives amount to 12.8% of Intel's planned investment.

"The expansion plan for the Kiryat Gat site is an important part of Intel's efforts to foster a more resilient global supply chain, alongside the company's ongoing and planned manufacturing investments in Europe and the US," Intel said in a statement Tuesday. Intel is among chipmakers diversifying manufacturing outside of Asia, which dominates chip production. The semiconductor pioneer is trying to restore its technological heft after being overtaken by rivals including Nvidia and Taiwan Semiconductor Manufacturing Co.

AMD

Ryzen vs. Meteor Lake: AMD's AI Often Wins, Even On Intel's Hand-Picked Tests (tomshardware.com) 6

Velcroman1 writes: Intel's new generation of "Meteor Lake" mobile CPUs herald a new age of "AI PCs," computers that can handle inference workloads such as generating images or transcribing audio without an Internet connection. Officially named "Intel Core Ultra" processors, the chips are the first to feature an NPU (neural processing unit) that's purpose-built to handle AI tasks. But there are few ways to actually test this feature at present: software will need to be rewritten to specifically direct operations at the NPU.

Intel has steered testers toward its Open Visual Inference and Neural Network Optimization (OpenVINO) AI toolkit. With those benchmarks, Tom's Hardware tested the new Intel chips against AMD -- and surprisingly, AMD chips often came out on top, even on these hand-selected benchmarks. Clearly, optimization will take some time!
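In practice, "directing operations at the NPU" through OpenVINO comes down to naming the device when a model is compiled. A minimal sketch assuming OpenVINO's Python runtime (the model path is hypothetical; device names follow OpenVINO's conventions, with the NPU appearing on Core Ultra systems with current drivers):

```python
from openvino.runtime import Core

core = Core()
print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU']

model = core.read_model("model.xml")  # hypothetical OpenVINO IR model

# The benchmark target is selected simply by naming the device.
compiled = core.compile_model(model, device_name="NPU")  # or "CPU" / "GPU"
request = compiled.create_infer_request()
```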

Intel

Intel Unveils New AI Chip To Compete With Nvidia and AMD (cnbc.com) 13

Intel unveiled new computer chips on Thursday, including Gaudi3, an AI chip for generative AI software. Gaudi3 will launch next year and will compete with rival chips from Nvidia and AMD that power big and power-hungry AI models. From a report: The most prominent AI models, like OpenAI's ChatGPT, run on Nvidia GPUs in the cloud. It's one reason Nvidia stock has been up nearly 230% year-to-date while Intel shares are up 68%. And it's why companies like AMD and, now Intel, have announced chips that they hope will attract AI companies away from Nvidia's dominant position in the market.

While the company was light on details, Gaudi3 will compete with Nvidia's H100, the main choice among companies that build huge farms of the chips to power AI applications, and AMD's forthcoming MI300X, when it starts shipping to customers in 2024. Intel has been building Gaudi chips since 2019, when it bought a chip developer called Habana Labs.

Intel

Intel Core Ultra Processors Debut for AI-powered PCs (venturebeat.com) 27

Intel launched its Intel Core Ultra processors for AI-powered PCs at its AI Everywhere event today. From a report: The big chip maker said these processors spearhead a new era in computing, offering unparalleled power efficiency, superior compute and graphics performance, and an unprecedented AI PC experience to mobile platforms and edge devices. Available immediately, these processors will be used in over 230 AI PCs coming from renowned partners like Acer, ASUS, Dell, Gigabyte, and more.

The Intel Core Ultra processors represent an architectural shift for Intel, marking its largest design change in 40 years. These processors harness the Intel 4 process technology and Foveros 3D advanced packaging, leveraging leading-edge processes for optimal performance and capabilities. The processors combine a performance-core (P-core) architecture that enhances instructions per cycle (IPC) with Efficient-cores (E-cores) and low-power Efficient-cores (LP E-cores). They deliver up to 11% more compute power compared to competitors, ensuring superior CPU performance for ultrathin PCs.

Features of Intel Core Ultra
Intel Arc GPU: Featuring up to eight Xe-cores, this GPU incorporates AI-based Xe Super Sampling, offering double the graphics performance compared to prior generations. It includes support for modern graphics features like ray tracing, mesh shading, AV1 encode and decode, HDMI 2.1, and DisplayPort 2.1 20G.
AI Boost NPU: Intel's latest NPU, Intel AI Boost, focuses on low-power, long-running AI tasks, augmenting AI processing on the CPU and GPU, offering 2.5x better power efficiency compared to its predecessors.
Advanced Performance Capabilities: With up to 16 cores, 22 threads, and Intel Thread Director for optimized workload scheduling, these processors boast a maximum turbo frequency of 5.1 GHz and support for up to 96 GB DDR5 memory capacity.
Cutting-edge Connectivity: Integrated Intel Wi-Fi 6E and support for discrete Intel Wi-Fi 7 deliver blazing wireless speeds, while Thunderbolt 4 ensures connectivity to multiple 4K monitors and fast storage with speeds of 40 Gbps.
Enhanced AI Performance: OpenVINO toolkits, ONNX, and ONNX Runtime offer streamlined workflow, automatic device detection, and enhanced AI performance.

Portables (Apple)

First AirJet-Equipped Mini PC Tested (tomshardware.com) 49

An anonymous reader quotes a report from Tom's Hardware: Zotac's ZBox PI430AJ mini PC is the first computer to use Frore Systems' fanless AirJet cooler, and as tested by HKEPC, it's not a gimmick. Two AirJet coolers were able to keep Intel's N300 CPU below 70 degrees Celsius under load, allowing for an incredibly thin mini PC with impressive performance. AirJet is the only active cooling solution for PCs that doesn't use fans; even so-called liquid coolers still use fans. Instead of using fans to push and pull air, AirJet uses ultrasonic waves, which have a variety of benefits: lower power consumption, near-silent operation, and a much thinner and smaller size. AirJet coolers can also do double duty as both intake and exhaust vents, whereas a fan can only do intake or exhaust, not both.

Equipped with two of the smaller AirJet Mini models, which are rated to cool 5.25 watts of heat each, the ZBox PI430AJ is just 23.7mm thick, or 0.93 inches. The mini PC's processor is Intel's low-end N300 Atom CPU with a TDP of 7 watts, and after HKEPC put the ZBox through a half-hour-long stress test, the N300 only peaked at 67 C. That's all thanks to AirJet being so thin and being able to both intake and exhaust air. For comparison, Beelink's Mini S12 Pro mini PC with the lower-power N100, which has a TDP of 6 watts, is 1.54 inches thick (66% thicker than the ZBox PI430AJ). Traditional fan-equipped coolers just can't match AirJet coolers in size, which is perhaps AirJet's biggest advantage.
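The thermal arithmetic behind that result is simple (a sketch using only the figures above; TDP is not the same as peak package power, so this is only a first-order check):

```python
airjet_mini_w = 5.25  # watts of heat each AirJet Mini is rated to remove
coolers = 2
n300_tdp_w = 7.0      # Intel N300 TDP

headroom_w = coolers * airjet_mini_w - n300_tdp_w
print(f"Cooling headroom: {headroom_w:.2f} W")  # 3.50 W beyond the CPU's TDP
```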
Last month, engineers from Frore Systems integrated the AirJet into an M2-based Apple MacBook Air. "With proper cooling, the relatively inexpensive laptop matched the performance of a more expensive MacBook Pro based on the same processor," reports Tom's Hardware.
Bug

Nearly Every Windows and Linux Device Vulnerable To New LogoFAIL Firmware Attack (arstechnica.com) 69

"Researchers have identified a large number of bugs to do with the processing of images at boot time," writes longtime Slashdot reader jd. "This allows malicious code to be installed undetectably (since the image doesn't have to pass any validation checks) by appending it to the image. None of the current secure boot mechanisms are capable of blocking the attack." Ars Technica reports: LogoFAIL is a constellation of two dozen newly discovered vulnerabilities that have lurked for years, if not decades, in Unified Extensible Firmware Interfaces responsible for booting modern devices that run Windows or Linux. The vulnerabilities are the product of almost a year's worth of work by Binarly, a firm that helps customers identify and secure vulnerable firmware. The vulnerabilities are the subject of a coordinated mass disclosure released Wednesday. The participating companies comprise nearly the entirety of the x64 and ARM CPU ecosystem, starting with UEFI suppliers AMI, Insyde, and Phoenix (sometimes still called IBVs or independent BIOS vendors); device manufacturers such as Lenovo, Dell, and HP; and the makers of the CPUs that go inside the devices, usually Intel, AMD or designers of ARM CPUs. The researchers unveiled the attack on Wednesday at the Black Hat Security Conference in London.

As its name suggests, LogoFAIL involves logos, specifically those of the hardware seller that are displayed on the device screen early in the boot process, while the UEFI is still running. Image parsers in UEFIs from all three major IBVs are riddled with roughly a dozen critical vulnerabilities that have gone unnoticed until now. By replacing the legitimate logo images with identical-looking ones that have been specially crafted to exploit these bugs, LogoFAIL makes it possible to execute malicious code at the most sensitive stage of the boot process, which is known as DXE, short for Driver Execution Environment. "Once arbitrary code execution is achieved during the DXE phase, it's game over for platform security," researchers from Binarly, the security firm that discovered the vulnerabilities, wrote in a whitepaper. "From this stage, we have full control over the memory and the disk of the target device, thus including the operating system that will be started." From there, LogoFAIL can deliver a second-stage payload that drops an executable onto the hard drive before the main OS has even started. The following video demonstrates a proof-of-concept exploit created by the researchers. The infected device -- a Gen 2 Lenovo ThinkCentre M70s running an 11th-Gen Intel Core with a UEFI released in June -- runs standard firmware defenses, including Secure Boot and Intel Boot Guard.
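On Linux, you can inspect the boot logo your firmware handed off to the OS via the ACPI BGRT table; a small sketch (the sysfs path is standard on UEFI systems, though not all firmware publishes a BGRT entry):

```python
from pathlib import Path

bgrt = Path("/sys/firmware/acpi/bgrt")
if bgrt.exists():
    # The raw boot logo (typically a BMP) as passed from UEFI to the OS.
    image = (bgrt / "image").read_bytes()
    print(f"Boot logo: {len(image)} bytes, magic: {image[:2]!r}")  # b'BM' = BMP
else:
    print("No BGRT exposed; this firmware doesn't publish its boot logo.")
```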
LogoFAIL vulnerabilities are tracked under the following designations: CVE-2023-5058, CVE-2023-39538, CVE-2023-39539, and CVE-2023-40238. However, this list is currently incomplete.

"A non-exhaustive list of companies releasing advisories includes AMI (PDF), Insyde, Phoenix, and Lenovo," reports Ars. "People who want to know if a specific device is vulnerable should check with the manufacturer."

"The best way to prevent LogoFAIL attacks is to install the UEFI security updates that are being released as part of Wednesday's coordinated disclosure process. Those patches will be distributed by the manufacturer of the device or the motherboard running inside the device. It's also a good idea, when possible, to configure UEFIs to use multiple layers of defenses. Besides Secure Boot, this includes both Intel Boot Guard and, when available, Intel BIOS Guard. There are similar additional defenses available for devices running AMD or ARM CPUs."
