Wireless Networking

Wi-Fi 7 is Ready To Go Mainstream (androidcentral.com) 28

The Wi-Fi Alliance is now starting to certify devices that use the latest generation of wireless connectivity, and the goal is to make sure these devices work with each other seamlessly. Android Central: Basically, the certification allows router brands and device manufacturers to guarantee that their products will work with other Wi-Fi 7 devices. Qualcomm, for its part, is announcing that it has several designs that leverage Wi-Fi 7, and that it achieved the Wi-Fi Alliance certification -- dubbed Wi-Fi Certified 7 -- for the FastConnect 7800 module that's baked into the Snapdragon 8 Gen 3 and 8 Gen 2, and the Networking Pro portfolio.

Wi-Fi Certified 7 is designed to enable interoperability and ensure that devices from various brands work without any issues. In addition to Qualcomm, the likes of MediaTek, Intel, Broadcom, CommScope, and MaxLinear are also picking up certifications for their latest networking products. I chatted with Andy Davidson, Sr. Director of Technology Planning at Qualcomm, ahead of the announcement to understand a little more about how Wi-Fi 7 is different. Wi-Fi 7 uses the 6GHz band -- similar to Wi-Fi 6E -- but introduces 320MHz channels that have the potential to deliver significantly greater bandwidth. Wi-Fi 7 also uses a clever new feature called Multi-Link Operation (MLO) that lets devices connect to two bands at the same time, leading to better signal strength and bandwidth.
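As a rough back-of-the-envelope illustration (not from the article), here is a minimal sketch of the peak PHY-rate arithmetic that makes the 320MHz claim concrete, using commonly cited 802.11be parameters (4096-QAM, rate-5/6 coding, 0.8 microsecond guard interval); the subcarrier counts and a two-stream client are assumptions, and real-world throughput is far lower.

```python
# Minimal sketch: peak 802.11be PHY rate vs. channel width (assumed parameters).
def phy_rate_gbps(data_subcarriers, bits_per_symbol=12, coding=5/6,
                  streams=1, symbol_us=12.8, gi_us=0.8):
    # bits per OFDM symbol per stream, divided by symbol duration (incl. guard interval)
    per_stream = data_subcarriers * bits_per_symbol * coding / ((symbol_us + gi_us) * 1e-6)
    return streams * per_stream / 1e9

# 160 MHz (Wi-Fi 6E-style width) vs. 320 MHz (Wi-Fi 7), two spatial streams as in a typical client
print(f"160 MHz, 2 streams: {phy_rate_gbps(1960, streams=2):.1f} Gbps")
print(f"320 MHz, 2 streams: {phy_rate_gbps(3920, streams=2):.1f} Gbps")
```

Doubling the channel width doubles the peak rate (roughly 2.9 Gbps to 5.8 Gbps in this two-stream case), which is the headroom MLO then combines with a second band.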
Further reading: Wi-Fi 7 Signals the Industry's New Priority: Stability.
Hardware

Oldest-Known Version of MS-DOS's Predecessor Discovered (arstechnica.com) 70

An anonymous reader quotes a report from Ars Technica: Microsoft's MS-DOS (and its IBM-branded counterpart, PC DOS) eventually became software juggernauts, powering the vast majority of PCs throughout the '80s and serving as the underpinnings of Windows throughout the '90s. But the software had humble beginnings, as we've detailed in our history of the IBM PC and elsewhere. It began in mid-1980 as QDOS, or "Quick and Dirty Operating System," the work of developer Tim Paterson at a company called Seattle Computer Products (SCP). It was later renamed 86-DOS, after the Intel 8086 processor, and this was the version that Microsoft licensed and eventually purchased.

Last week, Internet Archive user f15sim discovered and uploaded a new-old version of 86-DOS to the Internet Archive. Version 0.1-C of 86-DOS is available for download here and can be run using the SIMH emulator; before this, the earliest extant version of 86-DOS was version 0.34, also uploaded by f15sim. This version of 86-DOS is rudimentary even by the standards of early-'80s-era DOS builds and includes just a handful of utilities, a text-based chess game, and documentation for said chess game. But as early as it is, it remains essentially recognizable as the DOS that would go on to take over the entire PC business. If you're just interested in screenshots, some have been posted by user NTDEV on the site that used to be Twitter.

According to the version history available on Wikipedia, this build of 86-DOS would date back to roughly August of 1980, shortly after it lost the "QDOS" moniker. By late 1980, SCP was sharing version 0.3x of the software with Microsoft, and by early 1981, it was being developed as the primary operating system of the then-secret IBM Personal Computer. By the middle of 1981, roughly a year after 86-DOS began life as QDOS, Microsoft had purchased the software outright and renamed it MS-DOS. Microsoft and IBM continued to co-develop MS-DOS for many years; the version IBM licensed and sold on its PCs was called PC DOS, though for most of their history the two products were identical. Microsoft also retained the ability to license the software to other computer manufacturers as MS-DOS, which contributed to the rise of a market of mostly interoperable PC clones. The PC market as we know it today still more or less resembles the PC-compatible market of the mid-to-late 1980s, albeit with dramatically faster and more capable components.

Desktops (Apple)

Inside Apple's Massive Push To Transform the Mac Into a Gaming Paradise (inverse.com) 144

Apple is reinvesting in gaming with advanced Mac hardware, improvements to Apple silicon, and gaming-focused software, aiming not to repeat its past mistakes and to capture a larger share of the gaming market. In an article for Inverse, Raymond Wong provides an in-depth overview of this endeavor, including commentary from Apple's marketing managers Gordon Keppel, Leland Martin, and Doug Brooks. Here's an excerpt from the report: Gaming on the Mac from the 1990s until 2020, when Apple made a big shift to its own custom silicon, could be boiled down to this: Apple was in a hardware arms race with the PC that it couldn't win. Mac gamers were hopeful that the switch from PowerPC to Intel CPUs starting in 2005 would turn things around, but it didn't because by then, GPUs had become the more important hardware component for running 3D games, and the Mac's support for third-party GPUs could only be described as lackluster. Fast forward to 2023, and Apple has a renewed interest in gaming on the Mac, the likes of which it hasn't shown in the last 25 years. "Apple silicon has changed all that," Keppel tells Inverse. "Now, every Mac that ships with Apple silicon can play AAA games pretty fantastically. Apple silicon has been transformative of our mainstream systems that got tremendous boosts in graphics with M1, M2, and now with M3."

Ask any gadget reviewer (including myself) and they will tell you Keppel isn't just drinking the Kool-Aid because Apple pays him to. Macs with Apple silicon really are performant computers that can play some of the latest PC and console games. In three generations of desktop-class chip design, Apple has created a platform with "tens of millions of Apple silicon Macs," according to Keppel. That's tens of millions of Macs with monstrous CPU and GPU capabilities for running graphics-intensive games. Apple's upgrades to the GPUs on its silicon are especially impressive. The latest Apple silicon, the M3 family of chips, supports hardware-accelerated ray-tracing and mesh shading, features that only a few years ago didn't seem like they would ever be a priority, let alone ones that are built into the entire spectrum of MacBook Pros.

The "magic" of Apple silicon isn't just performance, says Leland Martin, an Apple software marketing manager. Whereas Apple's fallout with game developers on the Mac previously came down to not supporting specific computer hardware, Martin says Apple silicon started fresh with a unified hardware platform that not only makes it easier for developers to create Mac games for, but will allow for those games to run on other Apple devices. "If you look at the Mac lineup just a few years ago, there was a mix of both integrated and discrete GPUs," Martin says. "That can add complexity when you're developing games. Because you have multiple different hardware permutations to consider. Today, we've effectively eliminated that completely with Apple silicon, creating a unified gaming platform now across iPhone, iPad, and Mac. Once a game is designed for one platform, it's a straightforward process to bring it to the other two. We're seeing this play out with games like Resident Evil Village that launched first [on Mac] followed by iPhone and iPad."

"Gaming was fundamentally part of the Apple silicon design,â Doug Brooks, also on the Mac product marketing team, tells Inverse. "Before a chip even exists, gaming is fundamentally incorporated during those early planning stages and then throughout development. I think, big picture, when we design our chips, we really look at building balanced systems that provide great CPU, GPU, and memory performance. Of course, [games] need powerful GPUs, but they need all of those features, and our chips are designed to deliver on that goal. If you look at the chips that go in the latest consoles, they look a lot like that with integrated CPU, GPU, and memory." [...] "One thing we're excited about with this most recent launch of the M3 family of chips is that we're able to bring these powerful new technologies, Dynamic Caching, as well as ray-tracing and mesh shading across our entire line of chips," Brook adds. "We didn't start at the high end and trickle them down over time. We really wanted to bring that to as many customers as possible."

Microsoft

Microsoft Readies 'Next-Gen' AI-Focused PCs (windowscentral.com) 23

Microsoft is working on significant updates to its Surface Pro and Surface Laptop lines. According to Windows Central, new devices "will be announced in the spring and will be marketed as Microsoft's first true next-gen AI PCs." From the report: For the first time, both Surface Pro and Surface Laptop will be available in Intel and Arm flavors, and both will have next-gen NPU (neural processing unit) silicon. Sources are particularly excited about the Arm variants, which I understand will be powered by a custom version of Qualcomm's new Snapdragon X Series chips. Internally, Microsoft is calling next-generation Arm devices powered by Qualcomm's new chips "CADMUS" PCs. These PCs are purpose-built for the next version of Windows, codenamed Hudson Valley, and will utilize many of the upcoming next-gen AI experiences Microsoft is building into the 2024 release of Windows. Specifically, Microsoft touts CADMUS PCs as being genuinely competitive with Apple Silicon, sporting similar battery life, performance, and security. The next Surface Pro and Surface Laptop are expected to be some of the first CADMUS PCs to ship next year in preparation for the Hudson Valley release coming later in 2024.

So, what's changing with the Surface Laptop 6? I'm told this new Surface Laptop will finally have an updated design with thinner bezels, rounded display corners, and more ports. This will be the first time that Microsoft's Surface Laptop line is getting a design refresh, which is well overdue. The Surface Laptop 6 will again be available in two sizes. However, I'm told the smaller model will have a slightly larger 13.8-inch display, up from 13.5 inches on the Surface Laptop 5. Sources say the larger model remains at 15 inches. I'm told Surface Laptop 6 will also have an expanded selection of ports, including two USB-C ports and one USB-A port, along with the magnetic Surface Connect charging port. Microsoft is also adding a haptic touchpad (likely with Sensel technology) and a dedicated Copilot button on the keyboard deck for quick access to Windows Copilot.

The next Surface Pro is also shaping up to be a big update, although not as drastic as the Surface Laptop 6. According to my sources, the most significant changes coming to Surface Pro 10 are mostly related to its display, which sources say is now brighter with support for HDR content, has a new anti-reflective coating to reduce glare, and now also sports rounded display corners. I've also heard that Microsoft is testing a version of Surface Pro 10 with a slightly lower-resolution 2160 x 1440 display, down from the 2880 x 1920 screen found on previous Surface Pro models. Sources say this lower-resolution panel is only being considered for lower-tier models, meaning the more expensive models will continue to ship with the higher-resolution display. Lastly, I also hear Microsoft is equipping the next Surface Pro with an NFC reader for commercial customers and a wider FoV webcam, which will be enhanced with Windows Studio Effects. It should also be available in new colors. I've also heard we may get an updated Type Cover accessory with a dedicated Copilot button for quick access to Windows Copilot.

Intel

12VO Power Standard Appears To Be Gaining Steam, Will Reduce PC Cables and Costs (tomshardware.com) 79

An anonymous reader quotes a report from Tom's Hardware: The 12VO power standard (PDF), developed by Intel, is designed to reduce the number of power cables needed to power a modern PC, ultimately reducing cost. While industry uptake of the standard has been slow, a new slew of products from MSI indicates that 12VO is gaining traction.

MSI is gearing up with two 12VO-compliant motherboards, covering both Intel and AMD platforms, and a 12VO power supply that it's releasing simultaneously: The Pro B650 12VO WiFi, Pro H610M 12VO, and MSI 12VO PSU are all 'coming soon,' which presumably means they'll officially launch at CES 2024. HardwareLux got a pretty good look at MSI's offerings during its EHA (European Hardware Awards) tech tour, including the 'Project Zero' we covered earlier. One of the noticeable changes is the absence of a 24-pin ATX connector, as ATX12VO uses only a ten-pin connector. The publication also saw a 12VO-compliant FSP power supply in a compact system with a thick graphics card.

A couple of years ago, we reported on FSP's 650-watt and 750-watt SFX 12VO power supplies. Apart from that, there is a 1x 6-pin ATX12VO connector termed 'extra board connector' according to its manual and a 1x 8-pin 12V power connector for the CPU. There are two smaller 4-pin connectors that will provide the 5V power needed for SATA drives. It is likely each of these connectors provides power to two SATA-based drives. Intel proposed the ATX12VO standard several years ago, but adoption has been slow until now. This standard is designed to provide 12V exclusively, completely removing the direct 3.3V and 5V supplies. The success of the new standard will depend on the wide availability of compatible motherboards and power supplies.
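To make the single-rail idea concrete, here's a minimal sketch of the power-budget arithmetic in a 12VO build; the component wattages are hypothetical assumptions for illustration, not measurements from MSI's or FSP's hardware, and the board-level conversion that feeds SATA's 5V is represented only as an extra load on the 12V rail.

```python
# Minimal sketch: everything in an ATX12VO system is drawn from 12V, and the
# motherboard's own regulators derive the 5V that SATA drives need from it.
RAIL_V = 12.0

loads_w = {                      # hypothetical example system
    "cpu_eps_8pin": 125.0,       # CPU power connector
    "motherboard_10pin": 60.0,   # board logic, memory, fans, USB
    "gpu_pcie": 225.0,           # graphics card
}
sata_drives = 2
sata_5v_w_per_drive = 5.0        # rough per-drive 5V draw, converted on the board

total_w = sum(loads_w.values()) + sata_drives * sata_5v_w_per_drive
print(f"Total draw ~{total_w:.0f} W -> ~{total_w / RAIL_V:.1f} A on the single 12V rail")
```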

Intel

Intel To Invest $25 Billion in Israel After Winning Incentives (bloomberg.com) 150

Intel confirmed it will invest a total of $25 billion in Israel after securing $3.2 billion in incentives from the country's government. From a report: The outlay, announced by the Israeli government in June and unconfirmed by Intel until now, will go toward an expansion of the company's wafer fabrication site in Kiryat Gat, south of Tel Aviv. The incentives amount to 12.8% of Intel's planned investment.

"The expansion plan for the Kiryat Gat site is an important part of Intel's efforts to foster a more resilient global supply chain, alongside the company's ongoing and planned manufacturing investments in Europe and the US," Intel said in a statement Tuesday. Intel is among chipmakers diversifying manufacturing outside of Asia, which dominates chip production. The semiconductor pioneer is trying to restore its technological heft after being overtaken by rivals including Nvidia and Taiwan Semiconductor Manufacturing Co.

AMD

Ryzen vs. Meteor Lake: AMD's AI Often Wins, Even On Intel's Hand-Picked Tests (tomshardware.com) 6

Velcroman1 writes: Intel's new generation of "Meteor Lake" mobile CPUs herald a new age of "AI PCs," computers that can handle inference workloads such as generating images or transcribing audio without an Internet connection. Officially named "Intel Core Ultra" processors, the chips are the first to feature an NPU (neural processing unit) that's purpose-built to handle AI tasks. But there are few ways to actually test this feature at present: software will need to be rewritten to specifically direct operations at the NPU.

Intel has steered testers toward its Open Visual Inference and Neural Network Optimization (OpenVINO) AI toolkit. With those benchmarks, Tom's Hardware tested the new Intel chips against AMD -- and surprisingly, AMD chips often came out on top, even on these hand-selected benchmarks. Clearly, optimization will take some time!
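For readers who want to poke at the NPU themselves, a minimal sketch along these lines shows how OpenVINO workloads get directed at a specific device; it assumes a recent OpenVINO release with the NPU plugin and driver installed, and "model.xml" is a placeholder for an IR model you have already converted.

```python
# Minimal sketch, assuming OpenVINO 2023.2+ with Intel's NPU driver installed.
from openvino.runtime import Core

core = Core()
print(core.available_devices)            # e.g. ['CPU', 'GPU', 'NPU'] on a Core Ultra machine

model = core.read_model("model.xml")     # placeholder IR model path
device = "NPU" if "NPU" in core.available_devices else "CPU"   # fall back if no NPU is exposed
compiled = core.compile_model(model, device_name=device)
print(f"Compiled for {device}")
```

This is exactly the kind of explicit device targeting the article alludes to: until applications are rewritten to request the NPU, inference simply lands on the CPU or GPU.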

Intel

Intel Unveils New AI Chip To Compete With Nvidia and AMD (cnbc.com) 13

Intel unveiled new computer chips on Thursday, including Gaudi3, an AI chip for generative AI software. Gaudi3 will launch next year and will compete with rival chips from Nvidia and AMD that power big and power-hungry AI models. From a report: The most prominent AI models, like OpenAI's ChatGPT, run on Nvidia GPUs in the cloud. It's one reason Nvidia stock has been up nearly 230% year-to-date while Intel shares are up 68%. And it's why companies like AMD and now Intel have announced chips that they hope will attract AI companies away from Nvidia's dominant position in the market.

While the company was light on details, Gaudi3 will compete with Nvidia's H100, the main choice among companies that build huge farms of the chips to power AI applications, and AMD's forthcoming MI300X, when it starts shipping to customers in 2024. Intel has been building Gaudi chips since 2019, when it bought a chip developer called Habana Labs.

Intel

Intel Core Ultra Processors Debut for AI-powered PCs (venturebeat.com) 27

Intel launched its Intel Core Ultra processors for AI-powered PCs at its AI Everywhere event today. From a report: The big chip maker said these processors spearhead a new era in computing, offering unparalleled power efficiency, superior compute and graphics performance, and an unprecedented AI PC experience to mobile platforms and edge devices. Available immediately, these processors will be used in over 230 AI PCs coming from renowned partners like Acer, ASUS, Dell, Gigabyte, and more.

The Intel Core Ultra processors represent an architectural shift for Intel, marking its largest design change in 40 years. These processors harness the Intel 4 process technology and Foveros 3D advanced packaging, leveraging leading-edge processes for optimal performance and capabilities. The processors combine a performance-core (P-core) architecture that enhances instructions per cycle (IPC) with Efficient-cores (E-cores) and low-power Efficient-cores (LP E-cores). They deliver up to 11% more compute power compared to competitors, ensuring superior CPU performance for ultrathin PCs.

Features of Intel Core Ultra
Intel Arc GPU: Featuring up to eight Xe-cores, this GPU incorporates AI-based Xe Super Sampling, offering double the graphics performance compared to prior generations. It includes support for modern graphics features like ray tracing, mesh shading, AV1 encode and decode, HDMI 2.1, and DisplayPort 2.1 20G.
AI Boost NPU: Intel's latest NPU, Intel AI Boost, focuses on low-power, long-running AI tasks, augmenting AI processing on the CPU and GPU, offering 2.5x better power efficiency compared to its predecessors.
Advanced Performance Capabilities: With up to 16 cores, 22 threads, and Intel Thread Director for optimized workload scheduling, these processors boast a maximum turbo frequency of 5.1 GHz and support for up to 96 GB DDR5 memory capacity.
Cutting-edge Connectivity: Integrated Intel Wi-Fi 6E and support for discrete Intel Wi-Fi 7 deliver blazing wireless speeds, while Thunderbolt 4 ensures connectivity to multiple 4K monitors and fast storage with speeds of 40 Gbps.
Enhanced AI Performance: OpenVINO toolkits, ONNX, and ONNX Runtime offer streamlined workflow, automatic device detection, and enhanced AI performance.

Portables (Apple)

First AirJet-Equipped Mini PC Tested (tomshardware.com) 49

An anonymous reader quotes a report from Tom's Hardware: Zotac's ZBox PI430AJ mini PC is the first computer to use Frore System's fanless AirJet cooler, and as tested by HKEPC, it's not a gimmick. Two AirJet coolers were able to keep Intel's N300 CPU below 70 degrees Celsius under load, allowing for an incredibly thin mini PC with impressive performance. AirJet is the only active cooling solution for PCs that doesn't use fans; even so-called liquid coolers still use fans. Instead of using fans to push and pull air, AirJet uses ultrasonic waves, which have a variety of benefits: lower power consumption, near-silent operation, and a much thinner and smaller size. AirJet coolers can also do double duty as both intake and exhaust vents, whereas a fan can only do intake or exhaust, not both.

Equipped with two of the smaller AirJet Mini models, which are rated to cool 5.25 watts of heat each, the ZBox PI430AJ is just 23.7mm thick, or 0.93 inches. The mini PC's processor is Intel's low-end N300 Atom CPU with a TDP of 7 watts, and after HKEPC put the ZBox through a half-hour-long stress test, the N300 only peaked at 67 C. That's all thanks to AirJet being so thin and being able to both intake and exhaust air. For comparison, Beelink's Mini S12 Pro mini PC with the lower-power N100, which has a TDP of 6 watts, is 1.54 inches thick (66% thicker than the ZBox PI430AJ). Traditional fan-equipped coolers just can't match AirJet coolers in size, which is perhaps AirJet's biggest advantage.
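The thermal margin here is easy to sanity-check; a minimal sketch of the arithmetic, using the 5.25 W per-module rating and 7 W TDP figures quoted in the article, looks like this (the comparison is illustrative only and ignores case losses and ambient temperature):

```python
# Minimal sketch: cooling capacity of two AirJet Mini modules vs. the N300's TDP.
airjet_mini_w = 5.25                       # rated heat removal per AirJet Mini (per the article)
modules = 2
cooling_capacity_w = airjet_mini_w * modules   # 10.5 W total

n300_tdp_w = 7.0                           # Intel N300 TDP (per the article)
headroom_w = cooling_capacity_w - n300_tdp_w
print(f"{cooling_capacity_w} W of cooling vs. {n300_tdp_w} W TDP "
      f"-> {headroom_w:.2f} W of headroom, consistent with the ~67 C peak under load")
```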
Last month, engineers from Frore Systems integrated the AirJet into an M2-based Apple MacBook Air. "With proper cooling, the relatively inexpensive laptop matched the performance of a more expensive MacBook Pro based on the same processor," reports Tom's Hardware.
Bug

Nearly Every Windows and Linux Device Vulnerable To New LogoFAIL Firmware Attack (arstechnica.com) 69

"Researchers have identified a large number of bugs to do with the processing of images at boot time," writes longtime Slashdot reader jd. "This allows malicious code to be installed undetectably (since the image doesn't have to pass any validation checks) by appending it to the image. None of the current secure boot mechanisms are capable of blocking the attack." Ars Technica reports: LogoFAIL is a constellation of two dozen newly discovered vulnerabilities that have lurked for years, if not decades, in Unified Extensible Firmware Interfaces responsible for booting modern devices that run Windows or Linux. The vulnerabilities are the product of almost a year's worth of work by Binarly, a firm that helps customers identify and secure vulnerable firmware. The vulnerabilities are the subject of a coordinated mass disclosure released Wednesday. The participating companies comprise nearly the entirety of the x64 and ARM CPU ecosystem, starting with UEFI suppliers AMI, Insyde, and Phoenix (sometimes still called IBVs or independent BIOS vendors); device manufacturers such as Lenovo, Dell, and HP; and the makers of the CPUs that go inside the devices, usually Intel, AMD or designers of ARM CPUs. The researchers unveiled the attack on Wednesday at the Black Hat Security Conference in London.

As its name suggests, LogoFAIL involves logos, specifically those of the hardware seller that are displayed on the device screen early in the boot process, while the UEFI is still running. Image parsers in UEFIs from all three major IBVs are riddled with roughly a dozen critical vulnerabilities that have gone unnoticed until now. By replacing the legitimate logo images with identical-looking ones that have been specially crafted to exploit these bugs, LogoFAIL makes it possible to execute malicious code at the most sensitive stage of the boot process, which is known as DXE, short for Driver Execution Environment. "Once arbitrary code execution is achieved during the DXE phase, it's game over for platform security," researchers from Binarly, the security firm that discovered the vulnerabilities, wrote in a whitepaper. "From this stage, we have full control over the memory and the disk of the target device, thus including the operating system that will be started." From there, LogoFAIL can deliver a second-stage payload that drops an executable onto the hard drive before the main OS has even started. The following video demonstrates a proof-of-concept exploit created by the researchers. The infected device -- a Gen 2 Lenovo ThinkCentre M70s running an 11th-Gen Intel Core with a UEFI released in June -- runs standard firmware defenses, including Secure Boot and Intel Boot Guard.
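As a purely illustrative aside (this is not Binarly's tooling, and finding a signature proves nothing on its own), a short sketch like the following can locate embedded image blobs in a firmware dump, the kind of parser input LogoFAIL abuses; the file path is a placeholder for a SPI-flash or UEFI capsule image you have already extracted.

```python
# Minimal sketch: scan a firmware dump for common image-format signatures.
# Expect false positives (especially for the two-byte BMP magic); this only
# illustrates where logo-style parser inputs can hide inside UEFI images.
import sys

SIGNATURES = {
    "BMP":  b"BM",
    "PNG":  b"\x89PNG\r\n\x1a\n",
    "JPEG": b"\xff\xd8\xff",
    "GIF":  b"GIF8",
}

def scan(path):
    data = open(path, "rb").read()
    for name, magic in SIGNATURES.items():
        offset = data.find(magic)
        while offset != -1:
            print(f"{name} signature at offset {offset:#x}")
            offset = data.find(magic, offset + 1)

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else "firmware.bin")  # placeholder path
```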
LogoFAIL vulnerabilities are tracked under the following designations: CVE-2023-5058, CVE-2023-39538, CVE-2023-39539, and CVE-2023-40238. However, this list is currently incomplete.

"A non-exhaustive list of companies releasing advisories includes AMI (PDF), Insyde, Phoenix, and Lenovo," reports Ars. "People who want to know if a specific device is vulnerable should check with the manufacturer."

"The best way to prevent LogoFAIL attacks is to install the UEFI security updates that are being released as part of Wednesday's coordinated disclosure process. Those patches will be distributed by the manufacturer of the device or the motherboard running inside the device. It's also a good idea, when possible, to configure UEFIs to use multiple layers of defenses. Besides Secure Boot, this includes both Intel Boot Guard and, when available, Intel BIOS Guard. There are similar additional defenses available for devices running AMD or ARM CPUs."
Intel

Intel Calls AMD's Chips 'Snake Oil' (tomshardware.com) 189

Aaron Klotz, reporting for Tom's Hardware: Intel recently published a new playbook titled "Core Truths" that put AMD under direct fire for utilizing its older Zen 2 CPU architecture in its latest Ryzen 7000 mobile series CPU product stack. Intel later removed the document, but we have the slides below. The playbook is designed to educate customers about AMD's product stack and even calls it "snake oil."

Intel's playbook specifically talks about AMD's latest Ryzen 5 7520U, criticizing the fact that it features AMD's Zen 2 architecture from 2019 even though it sports a Ryzen 7000 series model name. Further on in the playbook, the company accuses AMD of selling "half-truths" to unsuspecting customers, stressing that younger kids' education needs the best CPU performance from the latest and greatest CPU technologies made today. To make its point clear, Intel used images in its playbook referencing "snake oil" and images of used car salesmen.

The playbook also criticizes AMD's new naming scheme for its Ryzen 7000 series mobile products, quoting ArsTechnica: "As a consumer, you're still intended to see the number 7 and think, 'Oh, this is new.'" Intel also published CPU benchmark comparisons of the 7520U against its 13th Gen Core i5-1335U to back up its points. Unsurprisingly, the 1335U was substantially faster than the Zen 2 counterpart.

Hardware

Apple's Chip Lab: Now 15 Years Old With Thousands of Engineers (cnbc.com) 68

"As of this year, all new Mac computers are powered by Apple's own silicon, ending the company's 15-plus years of reliance on Intel," according to a new report from CNBC.

"Apple's silicon team has grown to thousands of engineers working across labs all over the world, including in Israel, Germany, Austria, the U.K. and Japan. Within the U.S., the company has facilities in Silicon Valley, San Diego and Austin, Texas..." The latest A17 Pro announced in the iPhone 15 Pro and Pro Max in September enables major leaps in features like computational photography and advanced rendering for gaming. "It was actually the biggest redesign in GPU architecture and Apple silicon history," said Kaiann Drance, who leads marketing for the iPhone. "We have hardware accelerated ray tracing for the first time. And we have mesh shading acceleration, which allows game developers to create some really stunning visual effects." That's led to the development of iPhone-native versions from Ubisoft's Assassin's Creed Mirage, The Division Resurgence and Capcom's Resident Evil 4.

Apple says the A17 Pro is the first 3-nanometer chip to ship at high volume. "The reason we use 3-nanometer is it gives us the ability to pack more transistors in a given dimension. That is important for the product and much better power efficiency," said the head of Apple silicon, Johny Srouji. "Even though we're not a chip company, we are leading the industry for a reason." Apple's leap to 3-nanometer continued with the M3 chips for Mac computers, announced in October. Apple says the M3 enables features like 22-hour battery life and, similar to the A17 Pro, boosted graphics performance...

In a major shift for the semiconductor industry, Apple turned away from using Intel's PC processors in 2020, switching to its own M1 chip inside the MacBook Air and other Macs. "It was almost like the laws of physics had changed," said John Ternus, Apple's senior vice president of hardware engineering. "All of a sudden we could build a MacBook Air that's incredibly thin and light, has no fan, 18 hours of battery life, and outperformed the MacBook Pro that we had just been shipping." He said the newest MacBook Pro with Apple's most advanced chip, the M3 Max, "is 11 times faster than the fastest Intel MacBook Pro we were making. And we were shipping that just two years ago." Intel processors are based on x86 architecture, the traditional choice for PC makers, with a lot of software developed for it. Apple bases its processors on rival Arm architecture, known for using less power and helping laptop batteries last longer.

Apple's M1 in 2020 was a proving point for Arm-based processors in high-end computers, with other big names like Qualcomm — and reportedly AMD and Nvidia — also developing Arm-based PC processors. In September, Apple extended its deal with Arm through at least 2040.

Since Apple first debuted its homegrown semiconductors in 2010 in the iPhone 4, other companies started pursuing their own custom semiconductor development, including Amazon, Google, Microsoft and Tesla.

CNBC reports that Apple is also reportedly working on its own Wi-Fi and Bluetooth chip. Apple's Srouji wouldn't comment on "future technologies and products" but told CNBC "we care about cellular, and we have teams enabling that."
United States

Nvidia CEO Says US Will Take Years To Achieve Chip Independence (bloomberg.com) 121

Nvidia Chief Executive Officer Jensen Huang, who runs the semiconductor industry's most valuable company, said the US is as much as 20 years away from breaking its dependence on overseas chipmaking. From a report: Huang, speaking at the New York Times's DealBook conference in New York, explained how his company's products rely on myriad components that come from different parts of the world -- not just Taiwan, where the most important elements are manufactured. "We are somewhere between a decade and two decades away from supply chain independence," he said. "It's not a really practical thing for a decade or two."

The outlook suggests there's a long road ahead for a key Biden administration objective -- bringing more of the chipmaking industry to US shores. The president has championed bipartisan legislation to support the building of manufacturing facilities here. And many of the biggest companies are planning to expand their US operations. That includes Taiwan Semiconductor Manufacturing Co., Nvidia's top manufacturing partner, as well as Samsung and Intel.

Businesses

Nvidia Beats TSMC and Intel To Take Top Chip Industry Revenue Crown For the First Time (tomshardware.com) 21

Nvidia has swung from fourth to first place in an assessment of chip industry revenue published today. From a report: Taipei-based financial analyst Dan Nystedt noted that the green team took the revenue crown from contract chip-making titan TSMC as Q3 financials came into view. Those keeping an eye on the world of investing and finance will have seen our report about Nvidia's earnings explosion, evidenced by the firm's publishing of its Q3 FY24 results.

Nvidia charted an amazing performance, with a headlining $18.12 billion in revenue for the quarter, up 206% year-over-year (YoY). The firm's profits were also through the roof, and Nystedt posted a graph showing Nvidia elbowed past its chip industry rivals by this metric in Q3 2023, too. Nvidia's advance is supported by multiple highly successful operating segments, which have provided a multiplicative effect on its revenue and income. Again, we saw clear evidence of a seismic shift in revenue, with the latest set of financials shared with investors earlier this week.

Microsoft

Microsoft Celebrates 20th Anniversary of 'Patch Tuesday' (microsoft.com) 17

This week the Microsoft Security Response Center celebrated the 20th anniversary of Patch Tuesday updates.

In a blog post they call the updates "an initiative that has become a cornerstone of the IT world's approach to cybersecurity." Originating from the Trustworthy Computing memo by Bill Gates in 2002, our unwavering commitment to protecting customers continues to this day and is reflected in Microsoft's Secure Future Initiative announced this month. Each month, we deliver security updates on the second Tuesday, underscoring our pledge to cyber defense. As we commemorate this milestone, it's worth exploring the inception of Patch Tuesday and its evolution through the years, demonstrating our adaptability to new technology and emerging cyber threats...
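The "second Tuesday" cadence is simple enough to compute; here's a minimal sketch (the example month is the October 2023 anniversary release mentioned above).

```python
import calendar
import datetime

def second_tuesday(year: int, month: int) -> datetime.date:
    """Return the date of the second Tuesday of the given month (Patch Tuesday)."""
    weeks = calendar.monthcalendar(year, month)   # weeks as Mon..Sun lists, 0 = day outside month
    tuesdays = [w[calendar.TUESDAY] for w in weeks if w[calendar.TUESDAY] != 0]
    return datetime.date(year, month, tuesdays[1])

print(second_tuesday(2023, 10))   # 2023-10-10, the 20th-anniversary Patch Tuesday
```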

Before this unified approach, our security updates were sporadic, posing significant challenges for IT professionals and organizations in deploying critical patches in a timely manner. Senior leaders of the Microsoft Security Response Center (MSRC) at the time spearheaded the idea of a predictable schedule for patch releases, shifting from a "ship when ready" model to a regular weekly, and eventually, monthly cadence...

In addition to consolidating patch releases into a monthly schedule, we also organized the security update release notes into a consolidated location. Prior to this change, customers had to navigate through various Knowledge Base articles, making it difficult to find the information they needed to secure themselves. Recognizing the need for clarity and convenience, we provided a comprehensive overview of monthly releases. This change was pivotal at a time when not all updates were delivered through Windows Update, and customers needed a reliable source to find essential updates for various products.

Patch Tuesday has also influenced other vendors in the software and hardware spaces, leading to a broader industry-wide practice of synchronized security updates. This collaborative approach, especially with hardware vendors such as AMD and Intel, aims to provide a united front against vulnerabilities, enhancing the overall security posture of our ecosystems. While the volume and complexity of updates have increased, so has the collaboration with the security community. Patch Tuesday has fostered better relationships with security researchers, leading to more responsible vulnerability disclosures and quicker responses to emerging threats...

As the landscape of security threats evolves, so does our strategy, but our core mission of safeguarding our customers remains unchanged.

Supercomputing

Linux Foundation Announces Intent to Form 'High Performance Software Foundation' (linuxfoundation.org) 5

This week the Linux Foundation "announced the intention to form the High Performance Software Foundation."

"Through a series of technical projects, the High Performance Software Foundation aims to build, promote, and advance a portable software stack for high performance computing by increasing adoption, lowering barriers to contribution, and supporting development efforts." As use of high performance computing becomes ubiquitous in scientific computing and digital engineering, and AI use cases multiply, more and more data centers deploy GPUs and other compute accelerators. The High Performance Software Foundation intends to leverage investments made by the United States Department of Energy's Exascale Computing Project, the EuroHPC Joint Undertaking, and other international projects in accelerated high performance computing to exploit the performance of this diversifying set of architectures. As an umbrella project under the Linux Foundation, HPSF intends to provide a neutral space for pivotal projects in the high performance software ecosystem, enabling industry, academia, and government entities to collaborate together on the scientific software stack.

The High Performance Software Foundation already benefits from strong support across the high performance computing landscape, including leading companies and organizations like Amazon Web Services, Argonne National Laboratory, CEA, CIQ, Hewlett Packard Enterprise, Intel, Kitware, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, NVIDIA, Oak Ridge National Laboratory, Sandia National Laboratory, and the University of Oregon.

Its first open source technical projects include:
  • Spack: the high performance computing package manager
  • Kokkos: a performance-portable programming model for writing modern C++ applications in a hardware-agnostic way.
  • AMReX: a performance-portable software framework designed to accelerate solving partial differential equations on block-structured, adaptively refined meshes.
  • WarpX: a performance-portable Particle-in-Cell code with advanced algorithms that won the 2022 Gordon Bell Prize
  • Trilinos: a collection of reusable scientific software libraries, known in particular for linear, non-linear, and transient solvers, as well as optimization and uncertainty quantification.
  • Apptainer: a container system and image format specifically designed for secure high-performance computing.
  • VTK-m: a toolkit of scientific visualization algorithms for accelerator architectures.
  • HPCToolkit: performance measurement and analysis tools for computers ranging from laptops to the world's largest GPU-accelerated supercomputers.
  • E4S: the Extreme-scale Scientific Software Stack
  • Charliecloud: high performance computing-tailored, lightweight, fully unprivileged container implementation.

Linux

Canonical Intros Microcloud: Simple, Free, On-prem Linux Clustering (theregister.com) 16

Canonical hosted an amusingly failure-filled demo of its new easy-to-install, Ubuntu-powered tool for building small-to-medium scale, on-premises high-availability clusters, Microcloud, at an event in London yesterday. From a report: The intro to the talk leaned heavily on Canonical's looming 20th anniversary, and with good reason. Ubuntu has carved out a substantial slice of the Linux market for itself on the basis of being easier to use than most of its rivals, at no cost -- something that many Linux players still seem not to fully comprehend. The presentation was as buzzword-heavy as one might expect, and it's also extensively based on Canonical's in-house tech, such as the LXD containervisor, Snap packaging, and, optionally, the Ubuntu Core snap-based immutable distro. (The only missing buzzword didn't crop up until the Q&A session, and we were pleased by its absence: it's not built on and doesn't use Kubernetes, but you can run Kubernetes on it if you wish.)

We're certain this is going to turn off or alienate a lot of the more fundamentalist Penguinistas, but we are equally sure that Canonical won't care. In the immortal words of Kevin Smith, it's not for critics. Microcloud combines several existing bits of off-the-shelf FOSS tech in order to make it easy to link from three to 50 Ubuntu machines into an in-house, private high-availability cluster, with live migration and automatic failover. It uses its own LXD containervisor to manage nodes and workloads, Ceph for distributed storage, OpenZFS for local storage, and OVN to virtualize the cluster interconnect. All the tools are packaged as snaps. It supports both x86-64 and Arm64 nodes, including Raspberry Pi kit, and clusters can mix both architectures. The event included several demonstrations using an on-stage cluster of three ODROID machines with "Intel N6005" processors, so we reckon they were ODROID H3+ units -- which we suspect the company chose because of their dual Ethernet connections.

Network

Ethernet is Still Going Strong After 50 Years (ieee.org) 81

The technology has become the standard LAN worldwide. From a report: Ethernet became commercially available in 1980 and quickly grew into the industry LAN standard. To provide computer companies with a framework for the technology, in June 1983 Ethernet was adopted as a standard by the IEEE 802 Local Area Network Standards Committee. Currently, the IEEE 802 family consists of 67 published standards, with 49 projects under development. The committee works with standards agencies worldwide to publish certain IEEE 802 standards as international guidelines.

A plaque recognizing the technology is displayed outside the PARC facility. It reads: "Ethernet wired LAN was invented at Xerox Palo Alto Research Center (PARC) in 1973, inspired by the ALOHAnet packet radio network and the ARPANET. In 1980 Xerox, DEC, and Intel published a specification for 10 Mbps Ethernet over coaxial cable that became the IEEE 802.3-1985 Standard. Later augmented for higher speeds, and twisted-pair, optical, and wireless media, Ethernet became ubiquitous in home, commercial, industrial, and academic settings worldwide."

Bug

Intel Fixes High-Severity CPU Bug That Causes 'Very Strange Behavior' (arstechnica.com) 22

An anonymous reader quotes a report from Ars Technica: Intel on Tuesday pushed microcode updates to fix a high-severity CPU bug that has the potential to be maliciously exploited against cloud-based hosts. The flaw, affecting virtually all modern Intel CPUs, causes them to "enter a glitch state where the normal rules don't apply," Tavis Ormandy, one of several security researchers inside Google who discovered the bug, reported. Once triggered, the glitch state results in unexpected and potentially serious behavior, most notably system crashes that occur even when untrusted code is executed within a guest account of a virtual machine, which, under most cloud security models, is assumed to be safe from such faults. Escalation of privileges is also a possibility.

The bug, tracked under the common name Reptar and the designation CVE-2023-23583, is related to how affected CPUs manage prefixes, which change the behavior of instructions sent by running software. Intel x64 decoding generally allows redundant prefixes -- meaning those that don't make sense in a given context -- to be ignored without consequence. During testing in August, Ormandy noticed that the REX prefix was generating "unexpected results" when running on Intel CPUs that support a newer feature known as fast short repeat move, which was introduced in the Ice Lake architecture to fix microcoding bottlenecks. The unexpected behavior occurred when adding the redundant rex.r prefixes to the FSRM-optimized rep mov operation. [...]
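For the curious, here is a minimal, non-executing sketch of what a "redundant prefix" looks like at the byte level; the exact byte sequence and conditions Ormandy used to trigger the glitch are more involved, so this only illustrates the encoding concept.

```python
# Illustrative only: build and print the encoding of a REP MOVSB carrying a
# redundant REX.R prefix. REX bytes occupy 0x40-0x4F and sit immediately before
# the opcode; the R bit is meaningless for MOVSB, so hardware should ignore it.
REP   = b"\xf3"   # REP legacy prefix
REX_R = b"\x44"   # REX prefix with only the R bit set -- redundant here
MOVSB = b"\xa4"   # MOVSB opcode

instruction = REP + REX_R + MOVSB
print(instruction.hex(" "))   # f3 44 a4
```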

Intel's official bulletin lists two classes of affected products: those that were already fixed and those that are fixed using microcode updates released Tuesday. An exhaustive list of affected CPUs is available here. As usual, the microcode updates will be available from device or motherboard manufacturers. While individuals aren't likely to face any immediate threat from this vulnerability, they should check with the manufacturer for a fix. People with expertise in x86 instruction and decoding should read Ormandy's post in its entirety. For everyone else, the most important takeaway is this: "However, we simply don't know if we can control the corruption precisely enough to achieve privilege escalation." That means it's not possible for people outside of Intel to know the true extent of the vulnerability severity. That said, anytime code running inside a virtual machine can crash the hypervisor the VM runs on, cloud providers like Google, Microsoft, Amazon, and others are going to immediately take notice.
