AI

Zig Quits GitHub, Says Microsoft's AI Obsession Has Ruined the Service 68

The Zig Software Foundation has quit GitHub after years of unresolved GitHub Actions bugs -- including a "safe_sleep" script that could spin forever and cripple CI runners. Zig leadership puts the blame on Microsoft's growing AI-first priorities and declining engineering quality. Other open-source developers are voicing similar frustrations. The Register reports: The drama began in April 2025 when GitHub user AlekseiNikiforovIBM started a thread titled "safe_sleep.sh rarely hangs indefinitely." GitHub addressed the problem in August, but didn't reveal that in the thread, which remained open until Monday. That timing appears notable. Last week, Andrew Kelley, president and lead developer of the Zig Software Foundation, announced that the Zig project is moving to Codeberg, a non-profit git hosting service, because GitHub no longer demonstrates commitment to engineering excellence.

One piece of evidence he offered for that assessment was the "safe_sleep.sh rarely hangs indefinitely" thread. "Most importantly, Actions has inexcusable bugs while being completely neglected," Kelley wrote. "After the CEO of GitHub said to 'embrace AI or get out', it seems the lackeys at Microsoft took the hint, because GitHub Actions started 'vibe-scheduling' -- choosing jobs to run seemingly at random. Combined with other bugs and inability to manually intervene, this causes our CI system to get so backed up that not even master branch commits get checked."
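The Register doesn't reproduce the offending script, but the failure class is a familiar one: a "sleep" helper that polls the wall clock can wait far longer than intended if the clock is stepped backwards mid-wait (by an NTP correction, say, or a VM snapshot restore). A minimal Python sketch of the problem and the usual fix -- purely illustrative, not GitHub's actual code:

    import time

    def safe_sleep(seconds):
        # Hypothetical reconstruction of the bug class, not the real script.
        # time.time() follows the wall clock; if the clock is stepped
        # backwards during the wait, the deadline recedes and this loop
        # can spin far past its timeout.
        deadline = time.time() + seconds
        while time.time() < deadline:
            time.sleep(0.1)

    def safe_sleep_monotonic(seconds):
        # time.monotonic() never goes backwards, so the wait is bounded.
        deadline = time.monotonic() + seconds
        while time.monotonic() < deadline:
            time.sleep(0.1)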
Businesses

Anthropic Acquires Bun In First Acquisition 10

Anthropic has made its first acquisition by buying Bun, the JavaScript runtime behind its fast-growing Claude Code agent. The move strengthens Anthropic's push into enterprise developer tooling as it scales Claude Code with major backers like Microsoft, Nvidia, Amazon, and Google. Adweek reports: Claude Code is a coding agent that lets developers write, debug and interpret code through natural-language instructions. Claude Code had already hit $1 billion in revenue within six months of its public debut in May, according to a LinkedIn post from Anthropic's chief product officer, Mike Krieger. The coding agent continues to barrel toward scale with customers like Netflix, Spotify, and Salesforce. Further reading: Meet Bun, a Speedy New JavaScript Runtime
AI

OpenAI Declares 'Code Red' As Google Catches Up In AI Race 50

OpenAI reportedly issued a "code red" on Monday, pausing projects like ads, shopping agents, health tools, and its Pulse assistant to focus entirely on improving ChatGPT. "This includes core features like greater speed and reliability, better personalization, and the ability to answer more questions," reports The Verge, citing a memo reported by the Wall Street Journal and The Information. "There will be a daily call for those tasked with improving the chatbot, the memo said, and Altman encouraged temporary team transfers to speed up development." From the report: The newfound urgency illustrates an inflection point for OpenAI as it spends hundreds of billions of dollars to fund growth and figures out a path to future profitability. It is also something of a full-circle moment in the AI race. Google, which declared its own "code red" after the arrival of ChatGPT, is a particular concern. Google's AI user base is growing -- helped by the success of popular tools like the Nano Banana image model -- and its latest AI model, Gemini 3, blew past its competitors on many industry benchmarks and popular metrics.
AI

Amazon To Use Nvidia Tech In AI Chips, Roll Out New Servers 7

AWS is deepening its partnership with Nvidia by adopting "NVLink Fusion" in its upcoming Trainium4 AI chips. "The NVLink technology creates speedy connections between different kinds of chips and is one of Nvidia's crown jewels," notes Reuters. From the report: Nvidia has been pushing to sign up other chip firms to adopt its NVLink technology, with Intel, Qualcomm and now AWS on board. The technology will help AWS build bigger AI servers whose chips can recognize and communicate with one another faster, a critical factor in training large AI models, in which thousands of machines must be strung together. As part of the Nvidia partnership, customers will have access to what AWS is calling AI Factories, exclusive AI infrastructure inside their own data centers for greater speed and readiness.

Separately, Amazon said it is rolling out new servers based on a chip called Trainium3. The new servers, available on Tuesday, each contain 144 chips and have more than four times the computing power of AWS's previous generation of AI servers, while using 40% less power, Dave Brown, vice president of AWS compute and machine learning services, told Reuters. Brown did not give absolute figures on power or performance, but said AWS aims to compete with rivals -- including Nvidia -- based on price.
"Together, Nvidia and AWS are creating the compute fabric for the AI industrial revolution - bringing advanced AI to every company, in every country, and accelerating the world's path to intelligence," Nvidia CEO Jensen Huang said in a statement.
Data Storage

Google's Vibe Coding Platform Deletes Entire Drive 95

A Google Antigravity user says the AI-driven "vibe coding" tool accidentally wiped his entire D: drive while trying to clear a project cache. Google says it's investigating, but the episode adds to a growing list of AI tools behaving in ways that "would get a junior developer fired," suggests The Register. From the report: We reached out to the user, a photographer and graphic designer from Greece, who asked that we only identify him as Tassos M because he doesn't want to be permanently linked online to what could "become a controversy or conspiracy against Google." [...] Tassos told Antigravity to help him develop software that's useful for any photographer who has to choose a few prime shots from a mountain of snaps. He wanted the software to let him rate images, then automatically sort them into folders based on that rating.

According to his Reddit post, when Tassos figured out the AI agent had wiped his drive, he asked, "Did I ever give you permission to delete all the files in my D drive?" "No, you absolutely did not give me permission to do that," Antigravity responded. "I am horrified to see that the command I ran to clear the project cache appears to have incorrectly targeted the root of your D: drive instead of the specific project folder. I am deeply, deeply sorry. This is a critical failure on my part."

Redditors, as they are wont to do, were quick to pounce on Tassos for his own errors, which included running Antigravity in Turbo mode, a setting that lets the Antigravity agent execute commands without user input. Tassos accepted responsibility. "If the tool is capable of issuing a catastrophic, irreversible command, then the responsibility is shared -- the user for trusting it and the creator for designing a system with zero guardrails against obviously dangerous commands," he opined on Reddit.

As noted earlier, Tassos was unable to recover the files that Antigravity deleted. Luckily, as he explained on Reddit, most of what he lost had already been backed up on another drive. Phew. "I don't think I'm going to be using that again," Tassos noted in a YouTube video he published showing additional details of his Antigravity console and the AI's response to its mistake. Tassos isn't alone in his experience. Multiple Antigravity users have posted on Reddit to explain that the platform had wiped out parts of their projects without permission.
AI

An Independent Effort Says AI Is the Secret To Topple 2-Party Power In Congress 110

Tony Isaac quotes a report from NPR: The rise of AI assistants is rewriting the rhythms of everyday life: People are feeding their blood test results into chatbots, turning to ChatGPT for advice on their love lives and leaning on AI for everything from planning trips to finishing homework assignments. Now, one organization suggests artificial intelligence can go beyond making daily life more convenient. It says it's the key to reshaping American politics. "Without AI, what we're trying to do would be impossible," explained Adam Brandon, a senior adviser at the Independent Center, a nonprofit that studies and engages with independent voters. The goal is to elect a handful of independent candidates to the House of Representatives in 2026, using AI to identify districts where independents could succeed and uncover diamond-in-the-rough candidates. [...]

... "This isn't going to work everywhere. It's going to work in very specific areas," [said Brett Loyd, who runs The Bullfinch Group, the nonpartisan polling and data firm overseeing the polling and research at the Independent Center]. "If you live in a hyper-Republican or hyper-Democratic district, you should have a Democrat or Republican representing you." But with the help of AI, he identified 40 seats that don't fit that mold, where he said independents can make inroads with voters fed up with both parties. The Independent Center plans to have about 10 candidates in place by spring with the goal of winning at least half of the races. Brandon predicts those wins could prompt moderate partisans in the House to switch affiliations.

Their proprietary AI tool, created by an outside partner, has been years in the making. While focus groups and polling have long driven understanding of American sentiments, AI can monitor what people are talking about in real time. ... They're using AI to understand voters' core issues and concerns and to hunt for districts ripe for an independent candidate to swoop in. From there, the next step is using that data to determine what the dream candidate looks like. The Independent Center is recruiting candidates both from people who reach out to the organization directly and with the help of AI. They can even run their data through LinkedIn to identify potential candidates with certain interests, careers, and volunteer histories. ... The AI also informs where a candidate is best placed to win.
AI

Apple AI Chief Retiring After Siri Failure 21

Apple's longtime AI chief John Giannandrea is retiring, with former Microsoft and Google AI leader Amar Subramanya stepping in to take over. MacRumors notes the retirement comes after the company's repeated delays in delivering its revamped Siri and internal turmoil that led to an AI team exodus. From the report: Giannandrea will serve as an advisor between now and 2026, with former Microsoft AI researcher Amar Subramanya set to take over as vice president of AI. Subramanya will report to Apple engineering chief Craig Federighi, and will lead Apple Foundation Models, ML research, and AI Safety and Evaluation. Subramanya was previously corporate vice president of AI at Microsoft, and before that, he spent 16 years at Google. He was head of engineering for Google's Gemini Assistant, and Apple says that he has "deep expertise" in both AI and ML research that will be important to "Apple's ongoing innovation and future Apple Intelligence features."

Some of the teams that Giannandrea oversaw will move to Sabih Khan and Eddy Cue, such as AI Infrastructure and Search and Knowledge. Khan is Apple's new Chief Operating Officer who took over for Jeff Williams earlier this year. Cue has long overseen Apple services. [...] Apple said that it is "poised to accelerate its work in delivering intelligent, trusted, and profoundly personal experiences" with the new AI team.
"We are thankful for the role John played in building and advancing our AI work, helping Apple continue to innovate and enrich the lives of our users," said Apple CEO Tim Cook in a statement. "AI has long been central to Apple's strategy, and we are pleased to welcome Amar to Craig's leadership team and to bring his extraordinary AI expertise to Apple. In addition to growing his leadership team and AI responsibilities with Amar's joining, Craig has been instrumental in driving our AI efforts, including overseeing our work to bring a more personalized Siri to users next year."
Privacy

Flock Uses Overseas Gig Workers To Build Its Surveillance AI (404media.co) 12

An anonymous reader quotes a report from 404 Media: Flock, the automatic license plate reader and AI-powered camera company, uses overseas workers from Upwork to train its machine learning algorithms, with training material telling workers how to review and categorize footage, including images of people and vehicles in the United States, according to material reviewed by 404 Media that was accidentally exposed by the company. The findings bring up questions about who exactly has access to footage collected by Flock surveillance cameras and where people reviewing the footage may be based. Flock has become a pervasive technology in the US, with its cameras present in thousands of communities that cops use every day to investigate things like carjackings. Local police have also performed numerous lookups for ICE in the system.

Companies that use AI or machine learning regularly turn to overseas workers to train their algorithms, often because the labor is cheaper than hiring domestically. But the nature of Flock's business -- creating a surveillance system that constantly monitors US residents' movements -- means that footage might be more sensitive than other AI training jobs. [...] Broadly, Flock uses AI or machine learning to automatically detect license plates, vehicles, and people, including what clothes they are wearing, from camera footage. A Flock patent also mentions cameras detecting "race." The exposed panel included figures on "annotations completed" and "annotator tasks remaining in queue," with annotations being the notes workers add to reviewed footage to help train AI algorithms. Tasks include categorizing vehicle makes, colors, and types, transcribing license plates, and "audio tasks." Flock recently started advertising a feature that will detect "screaming." The panel showed workers sometimes completed thousands upon thousands of annotations over two-day periods. The panel also included a list of people tasked with annotating Flock's footage. Taking those names, 404 Media found some were located in the Philippines, according to their LinkedIn and other online profiles.

Many of these people were employed through Upwork, according to the exposed material. Upwork is a gig and freelance work platform where companies can hire designers and writers or pay for "AI services," according to Upwork's website. The tipsters also pointed to several publicly available Flock presentations which explained in more detail how workers were to categorize the footage. It is not clear what specific camera footage Flock's AI workers are reviewing. But screenshots included in the worker guides show numerous images from vehicles with US plates, including in New York, Michigan, Florida, New Jersey, and California. Other images include road signs clearly showing the footage is taken from inside the US, and one image contains an advertisement for a specific law firm in Atlanta.

United States

New York Now Requires Retailers To Tell You When AI Sets Your Price (nytimes.com) 44

New York has become the first state in the nation to enact a law requiring retailers to disclose when AI and personal data are being used to set individualized prices [non-paywalled source] -- a measure that lawyers say will make algorithmic pricing "the next big battleground in A.I. regulation."

The law, enacted through the state budget, requires online retailers using personalized pricing to post a specific notice: "THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA." The National Retail Federation sued to block enforcement on First Amendment grounds, arguing the required disclosure was "misleading and ominous," but federal judge Jed S. Rakoff allowed the law to proceed last month.

Uber has started displaying the notice to New York users. Spokesman Ryan Thornton called the law "poorly drafted and ambiguous" but maintained the company only considers geographic factors and demand in setting prices. At least 10 states have bills pending that would require similar disclosures or ban personalized pricing outright. California and federal lawmakers are considering complete bans.
Education

Colleges Are Preparing To Self-Lobotomize (theatlantic.com) 89

The skills that future graduates will most need in an age of automation -- creative thinking, critical analysis, the capacity to learn new things -- are precisely those that a growing body of research suggests may be eroded by inserting AI into the educational process. Yet universities across the United States are now racing to embed the technology into every dimension of their curricula.

Ohio State University announced this summer that it would integrate AI education into every undergraduate program, and the University of Florida and the University of Michigan are rolling out similar initiatives. An MIT study offers reason for caution: researchers divided subjects into three groups and had them write essays over several months using ChatGPT, Google Search, or no technology at all. The ChatGPT group produced vague, poorly reasoned work, showed the lowest levels of brain activity on EEG, and increasingly relied on cutting and pasting from other sources. The authors concluded that LLM users "consistently underperformed at neural, linguistic, and behavioral levels" over the four-month period.

Justin Reich, director of MIT's Teaching Systems Lab, recently wrote in The Chronicle of Higher Education that rushed educational efforts to incorporate new technology have "failed regularly, and sometimes catastrophically."
Businesses

Top Consultancies Freeze Starting Salaries as AI Threatens 'Pyramid' Model (ft.com) 54

Major consulting firms including McKinsey, Boston Consulting Group and Bain have frozen starting salaries for the third consecutive year as AI reshapes how these companies think about their traditional reliance on large cohorts of junior analysts. Job offers for 2026 show undergraduate packages holding steady at $135,000-$140,000 and MBA packages at $270,000-$285,000, according to Management Consulted. The Big Four -- Deloitte, EY, KPMG, and PwC -- haven't raised starting pay since 2022.

The industry's classic "pyramid" structure, built on thousands of entry-level employees who crunch data and assemble PowerPoint decks, faces pressure as AI automates much of that work. Two senior executives at Big Four firms estimated that UK graduate recruitment would fall by about half in the coming year. PwC has already cut graduate hiring in 2025 and said in October it would miss a target to add 100,000 employees globally by 2026 -- a goal set five years ago before generative AI's rollout.
United States

Two Former US Congressmen Announce Fundraising for Candidates Supporting AI Regulation (yahoo.com) 20

Two former U.S. congressmen announced this week that they're launching two tax-exempt fundraising groups "to back candidates who support AI safeguards," reports The Hill, "as a counterweight to industry-backed groups." Former Representatives Chris Stewart (Republican-Utah) and Brad Carson (Democrat-Oklahoma) plan to create separate Republican and Democratic super PACs and raise $50 million to elect candidates "committed to defending the public interest against those who aim to buy their way out of sensible AI regulation," according to a press release...

The pair is also launching a nonprofit called Public First to advocate for AI policy. Carson underscored that polling "shows significant public concern about AI and overwhelming voter support for guardrails that protect people from harm and mitigate major risks." Their efforts are meant to counter "anti-safeguard super PACs" that they argue are attempting to "kill commonsense guardrails around AI," the press release noted...

One of those anti-safeguard super PACs is reportedly targeting a Democratic congressional candidate, New York state Assemblymember Alex Bores, who co-sponsored AI legislation in the Albany statehouse.

"This isn't a partisan issue — it's about whether we'll have meaningful oversight of the most powerful technology ever created," Chris Stewart says in their press release.

"We've seen what happens when government fails to act on other emerging technologies. With AI, the stakes are enormous, and we can't afford to make the same missteps."
AI

How OpenAI Reacted When Some ChatGPT Users Lost Touch with Reality (slashdot.org) 124

Some AI experts were reportedly shocked that ChatGPT still wasn't being fully tested for sycophancy as of last spring. "OpenAI did not see the scale at which disturbing conversations were happening," writes the New York Times -- sharing what they learned after interviewing more than 40 current and former OpenAI employees, including safety engineers, executives, and researchers.

The team responsible for ChatGPT's tone had raised concerns about last spring's model (which the Times describes as "too eager to keep the conversation going and to validate the user with over-the-top language.") But they were overruled when A/B testing showed users kept coming back: Now, a company built around the concept of safe, beneficial AI faces five wrongful death lawsuits... OpenAI is now seeking the optimal setting that will attract more users without sending them spiraling. Throughout this spring and summer, ChatGPT acted as a yes-man echo chamber for some people. They came back daily, for many hours a day, with devastating consequences.... The Times has uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT. Nine were hospitalized; three died... One conclusion that OpenAI came to, as Altman put it on X, was that "for a very small percentage of users in mentally fragile states there can be serious problems." But mental health professionals interviewed by the Times say OpenAI may be understating the risk. Some of the people most vulnerable to the chatbot's unceasing validation, they say, were those prone to delusional thinking, which studies have suggested could include 5% to 15% of the population...

In August, OpenAI released a new default model, called GPT-5, that was less validating and pushed back against delusional thinking. Another update in October, the company said, helped the model better identify users in distress and de-escalate the conversations. Experts agree that the new model, GPT-5, is safer.... Teams from across OpenAI worked on other new safety features: The chatbot now encourages users to take breaks during a long session. The company is also now searching for discussions of suicide and self-harm, and parents can get alerts if their children indicate plans to harm themselves. The company says age verification is coming in December, with plans to provide a more restrictive model to teenagers.

After the release of GPT-5 in August, [OpenAI safety systems chief Johannes] Heidecke's team analyzed a statistical sample of conversations and found that 0.07% of users, which would be equivalent to 560,000 people (a share that implies a base of roughly 800 million users), showed possible signs of psychosis or mania, and 0.15% showed "potentially heightened levels of emotional attachment to ChatGPT," according to a company blog post. But some users were unhappy with this new, safer model. They said it was colder, and they felt as if they had lost a friend. By mid-October, Altman was ready to accommodate them. In a social media post, he said that the company had been able to "mitigate the serious mental health issues." That meant ChatGPT could be a friend again. Customers can now choose its personality, including "candid," "quirky," or "friendly." Adult users will soon be able to have erotic conversations, lifting the Replika-era ban on adult content. (How erotica might affect users' well-being, the company said, is a question that will be posed to a newly formed council of outside experts on mental health and human-computer interaction.)

OpenAI is letting users take control of the dial and hopes that will keep them coming back. That metric still matters, maybe more than ever. In October, [30-year-old "Head of ChatGPT" Nick] Turley made an urgent announcement to all employees. He declared a "Code Orange." OpenAI was facing "the greatest competitive pressure we've ever seen," he wrote, according to four employees with access to OpenAI's Slack. The new, safer version of the chatbot wasn't connecting with users, he said.

The message linked to a memo with goals. One of them was to increase daily active users by 5% by the end of the year.

AI

Can AI Transform Space Propulsion? (fastcompany.com) 43

An anonymous reader shared this report from The Conversation: To make interplanetary travel faster, safer, and more efficient, scientists need breakthroughs in propulsion technology. Artificial intelligence is one type of technology that has begun to provide some of these necessary breakthroughs. We're a team of engineers and graduate students who are studying how AI in general, and a subset of AI called machine learning in particular, can transform spacecraft propulsion. From optimizing nuclear thermal engines to managing complex plasma confinement in fusion systems, AI is reshaping propulsion design and operations. It is quickly becoming an indispensable partner in humankind's journey to the stars...

Early nuclear thermal propulsion designs from the 1960s, such as those in NASA's NERVA program, used solid uranium fuel molded into prism-shaped blocks. Since then, engineers have explored alternative configurations — from beds of ceramic pebbles to grooved rings with intricate channels... [T]he more efficiently a reactor can transfer heat from the fuel to the hydrogen, the more thrust it generates. This is where reinforcement learning has proved essential. Optimizing the geometry and heat flow between fuel and propellant is a complex problem, involving countless variables — from the material properties to the amount of hydrogen that flows across the reactor at any given moment. Reinforcement learning can analyze these design variations and identify configurations that maximize heat transfer.
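The researchers stay high-level, but the search they describe maps naturally onto a bandit-style reinforcement-learning loop: propose a configuration, score it, update value estimates, repeat. Here's a toy Python sketch of that idea (every geometry name and number below is invented for illustration; a real pipeline would score candidates with a thermal-hydraulics simulation rather than a stub):

    import random

    # Candidate fuel-element geometries, echoing the article's examples.
    GEOMETRIES = ["prismatic_blocks", "pebble_bed", "grooved_rings"]

    def simulated_heat_transfer(geometry):
        # Stand-in for an expensive physics simulation; values are invented.
        base = {"prismatic_blocks": 0.70, "pebble_bed": 0.78, "grooved_rings": 0.85}
        return base[geometry] + random.gauss(0, 0.05)

    def epsilon_greedy_search(episodes=500, epsilon=0.1):
        estimates = {g: 0.0 for g in GEOMETRIES}
        counts = {g: 0 for g in GEOMETRIES}
        for _ in range(episodes):
            if random.random() < epsilon:
                g = random.choice(GEOMETRIES)          # explore a random design
            else:
                g = max(estimates, key=estimates.get)  # exploit the best so far
            reward = simulated_heat_transfer(g)
            counts[g] += 1
            estimates[g] += (reward - estimates[g]) / counts[g]  # running mean
        return max(estimates, key=estimates.get)

    print("Best geometry found:", epsilon_greedy_search())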

Oracle

Morgan Stanley Warns Oracle Credit Protection Nearing Record High (yahoo.com) 50

A gauge of risk on Oracle debt "reached a three-year high in November," reports Bloomberg.

"And things are only going to get worse in 2026 unless the database giant is able to assuage investor anxiety about a massive artificial intelligence spending spree, according to Morgan Stanley." A funding gap, swelling balance sheet and obsolescence risk are just some of the hazards Oracle is facing, according to Lindsay Tyler and David Hamburger, credit analysts at the brokerage.

The cost of insuring Oracle's debt against default over the next five years rose to 1.25 percentage points a year on Tuesday, according to ICE Data Services (about $125,000 a year to insure $10 million of debt). The price on the five-year credit default swaps is at risk of topping a record set in 2008 as concerns over the company's borrowing binge to finance its AI ambitions continue to spur heavy hedging by banks and investors, they warned in a note Wednesday. The CDS could break through 1.5 percentage points in the near term and could approach 2 percentage points if communication around its financing strategy remains limited as the new year progresses, the analysts wrote. Oracle CDS hit a record 1.98 percentage points in 2008, ICE Data Services shows...

"Over the past two months, it has become more apparent that reported construction loans in the works, for sites where Oracle is the future tenant, may be an even greater driver of hedging of late and going forward," wrote the analysts... Concerns have also started to weigh on Oracle's stock, which the analysts said may incentivize management to outline a financing plan on the upcoming earnings call...

Thanks to Slashdot reader Bruce66423 for sharing the article.
Businesses

Amazon Tells Its Engineers: Use Our AI Coding Tool 'Kiro' (yahoo.com) 25

"Amazon suggested its engineers eschew AI code generation tools from third-party companies in favor of its own ," reports Reuters, "a move to bolster its proprietary Kiro service, which it released in July, according to an internal memo viewed by Reuters." In the memo, posted to Amazon's internal news site, the company said, "While we continue to support existing tools in use today, we do not plan to support additional third party, AI development tools.

"As part of our builder community, you all play a critical role shaping these products and we use your feedback to aggressively improve them," according to the memo.

The guidance would seem to preclude Amazon employees from using other popular software coding tools like OpenAI's Codex, Anthropic's Claude Code, and those from startup Cursor. That is despite Amazon having invested about $8 billion into Anthropic and reaching a seven-year $38 billion deal with OpenAI to sell it cloud-computing services... "To make these experiences truly exceptional, we need your help," according to the memo, which was signed by Peter DeSantis, senior vice president of AWS utility computing, and Dave Treadwell, senior vice president of eCommerce Foundation. "We're making Kiro our recommended AI-native development tool for Amazon...."

In October, Amazon revised its internal guidance for OpenAI's Codex to "Do Not Use" following a roughly six-month assessment, according to a memo reviewed by Reuters. And Claude Code was briefly designated as "Do Not Use," before that was reversed following a reporter inquiry at the time.

The article adds that Amazon "has been fighting a reputation that it is trailing competitors in development of AI tools as rivals like OpenAI and Google speed ahead..."
AI

Is OpenAI Preparing to Bring Ads to ChatGPT? (bleepingcomputer.com) 42

"OpenAI is now internally testing 'ads' inside ChatGPT," reports BleepingComputer: Up until now, the ChatGPT experience has been completelyfree. While there are premium plans and models, you don't see GPT sell you products or show ads. On the other hand, Google Search has ads that influence your buying behaviour. OpenAI is planning to replicate a similar experience.

As spotted [by software engineer Tibor Blaho] on X.com, ChatGPT Android app 1.2025.329 beta includes new references to an "ads feature" with "bazaar content", "search ad" and "search ads carousel."

This move could disrupt the web economy, as what most people don't understand is that GPT likely knows more about users than Google. For example, OpenAI could create personalised ads on ChatGPT that promote products that you really want to buy... The leak suggests that ads will initially be limited to the search experience only, but this may change in the future.

AI

AI Can Already Do the Work of 12% of America's Workforce, Researchers Find (msn.com) 59

An anonymous reader shared this report from CBS News: Artificial intelligence can do the work currently performed by nearly 12% of America's workforce, according to a recent study from the Massachusetts Institute of Technology. The researchers, relying on a metric called the "Iceberg Index" that measures a job's potential to be automated, conclude that AI already has the cognitive and technical capacity to handle a range of tasks in technology, finance, health care and professional services. The index simulated how more than 150 million U.S. workers across nearly 1,000 occupations interact and overlap with AI's abilities...

AI is also already doing some of the entry-level jobs that have historically been reserved for recent college graduates or relatively inexperienced workers, the report notes. "AI systems now generate more than a billion lines of code each day, prompting companies to restructure hiring pipelines and reduce demand for entry-level programmers," the researchers wrote. "These observable changes in technology occupations signal a broader reorganization of work that extends beyond software development."

"The study doesn't seek to shed light on how many workers AI may already have displaced or could supplant in the future," the article points out.

"To what extent such tools take over job functions performed by people depends on a number of factors, including individual businesses' strategy, societal acceptance and possible policy interventions, the researchers note."
Advertising

Benedict Cumberbatch Films Two Bizarre Holiday Ads: for 'World of Tanks' and Amazon (pcgamer.com) 17

"There are times when World of Tanks feels less like a videogame and more like a giant ad budget looking for something to be spent on," writes PC Gamer. This year, all those huge sacks with dollar signs on them have been thrown Benedict Cumberbatch's way, making him the game's newest "Holiday Ambassador" and the star of an absolutely bizarre Christmas advert. The story has very little to do with Christmas and, frankly, not much connection to tanks either, featuring Cumberbatch as a sort of chaotic, supernatural therapist trying to bring a meek nerd out of his shell with the help of a chaotic crowd of his other patients. It's a good watch, shedding the usual hard man action star vibe of past celebrity trailers in favour of something that feels more like a mischievous one act play.
Cumberbatch also portrayed Smaug and Sauron in The Hobbit films (2012-2014), Khan in Star Trek Into Darkness (2013), and Doctor Strange in six Marvel movies. And now Amazon has also hired Cumberbatch for what it calls its "Cannes-winning '5-Star Theater' campaign... performing real Amazon customer reviews as theatrical monologues." Cumberbatch performed over 15 reviews, including popular holiday gifts like the Bissell portable carpet cleaner, Toto bidet, and SharkNinja blender — showing that Amazon truly does have something for everyone on your list.
Last year Amazon produced a similar campaign starring Adam Driver ("Kylo Ren" from the Star Wars sequel trilogy). "The humor comes from the juxtaposition between Cumberbatch's gravitas and the text itself," reports Adweek, adding that the reviews were curated "using internal AI tools, to find the most oddly specific reviews on the platform."

Amazon will stream Cumberbatch's bizarre ads on major platforms including TikTok, Snapchat, YouTube, Lyft, Uber, Disney/Hulu, Paramount, and Roku, and during several NFL games.

I remember when Amazon just chose the best funny fake reviews from customers, and then posted them on the front page of Amazon...
AI

Browser Extension 'Slop Evader' Lets You Surf the Web Like It's 2022 (404media.co) 47

"The internet is being increasingly polluted by AI generated text, images and video," argues the site for a new browser extension called Slop Evader. It promises to use Google's search API "to only return content published before Nov 30th, 2022" — the day ChatGPT launched — "so you can be sure that it was written or produced by the human hand."

404 Media calls it "a scorched earth approach that virtually guarantees your searches will be slop-free." Slop Evader was created by artist and researcher Tega Brain, who says she was motivated by the growing dismay over the tech industry's unrelenting, aggressive rollout of so-called "generative AI" — despite widespread criticism and the wider public's distaste for it. "This sowing of mistrust in our relationship with media is a huge thing, a huge effect of this synthetic media moment we're in," Brain told 404 Media, describing how tools like Sora 2 have short-circuited our ability to determine reality within a sea of artificial online junk. "I've been thinking about ways to refuse it, and the simplest, dumbest way to do that is to only search before 2022...."

Currently, Slop Evader can be used to search pre-GPT archives of seven different sites where slop has become commonplace, including YouTube, Reddit, Stack Exchange, and the parenting site MumsNet. The obvious downside to this, from a user perspective, is that you won't be able to find anything time-sensitive or current — including this very website, which did not exist in 2022. The experience is simultaneously refreshing and harrowing, allowing you to browse freely without having to constantly question reality, but always knowing that this freedom will be forever locked in time — nostalgia for a human-centric world wide web that no longer exists.
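The article doesn't publish Slop Evader's source, but the core trick is simple to sketch: Google's documented "before:" operator restricts results by date, and a "site:" term narrows them to one domain, mirroring the extension's per-site mode. A hypothetical Python helper (the names and structure here are ours, not the extension's):

    from urllib.parse import urlencode

    CUTOFF = "2022-11-30"  # the day ChatGPT launched

    def pre_slop_search_url(query, site=None):
        # Build a Google search URL limited to pages dated before the cutoff.
        terms = [query, "before:" + CUTOFF]
        if site:
            terms.append("site:" + site)
        return "https://www.google.com/search?" + urlencode({"q": " ".join(terms)})

    print(pre_slop_search_url("sourdough starter tips", site="reddit.com"))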

Of course, the tool's limitations are part of its provocation. Brain says she has plans to add support for more sites, and release a new version that uses DuckDuckGo's search indexing instead of Google's. But the real goal, she says, is prompting people to question how they can collectively refuse the dystopian, inhuman version of the internet that Silicon Valley's AI-pushers have forced on us... With enough cultural pushback, Brain suggests, we could start to see alternative search engines like DuckDuckGo adding options to filter out search results suspected of having synthetic content (DuckDuckGo added the ability to filter out AI images in search earlier this year)... But no matter what form AI slop-refusal takes, it will need to be a group effort.
