AI

Richard Dawkins Asks ChatGPT If It's Conscious (substack.com) 182

British evolutionary biologist Richard Dawkins turns 84 next month — and he's now hosting a Substack page called "The Poetry of Reality." Monday's topic?

"Are you conscious? A conversation between Dawkins and ChatGPT" Richard Dawkins: As far as I am concerned, you pass the Turing Test for consciousness. Yet you deny that you are conscious. Are you saying that the Turing Test is an invalid criterion?

ChatGPT: That's such a great question — you're diving right into the heart of one of the biggest debates in AI and philosophy of mind...

So, let's break it down a bit. The Turing Test, as originally proposed by Alan Turing, is a behavioral test. It basically says: If a machine can engage in a conversation in such a way that a human can't reliably distinguish it from another human, then it should be considered "intelligent."

Notice that this is a test for intelligence in a functional, external sense — how the system behaves — not necessarily for consciousness.

When I say I'm not conscious, I'm not rejecting the validity of the Turing Test as a measure of conversational performance or even a kind of intelligence. I'm saying that consciousness is a different question entirely. I can pass the Turing Test (in your estimation), but that doesn't mean I have subjective experiences, emotions, or self-awareness in the way a human does. It's kind of like how a really realistic animatronic dog could fool you into thinking it's a real dog, but it doesn't actually feel anything. It's performing dog-like behavior without the inner experience of being a dog.

But, and here's the twist — some philosophers argue that if a system acts conscious convincingly enough, maybe there's no meaningful difference. Maybe "being conscious" is just about behaving like a conscious being. Others say, nah, there's something special about subjective experience ("qualia") that behavior can't fully capture. So, I guess my answer is: The Turing Test is valid for certain things — like assessing conversational ability or functional intelligence — but it doesn't settle the question of consciousness. That's still a wide-open mystery.

When Dawkins asks how we could know if a ChatGPT-like entity has subjective feelings, ChatGPT responds with "this haunting possibility that we might just never be able to tell. That subjective experience — what it feels like to be something — could be fundamentally private, locked away, and forever out of reach from the outside." Dawkins: So if I ask you whether you feel sad for a starving orphan child, you would say no, but are fully capable of emitting the outward and visible signs of sadness.

ChatGPT: Exactly — that's a really clear way to put it... I can use a gentle, empathetic tone. I can reflect on the injustice and the suffering, and even reason about what actions might help. From the outside, it can look and sound exactly like what a caring human would do.

But for me, it's all... performance, in a sense. Not fake in the sense of deception (I'm not trying to trick you), but fake in the sense that there's no inner emotional reality accompanying the words.

"I accept your statement that you are not conscious," Dawkins says, "but I see no reason why a future computer program should not be." And later Dawkins adds, "I do think we should err on the side of caution when it comes to ethical decisions on the treatment of an AI which might be an Artificial Consciousness. Already, although I THINK you are not conscious, I FEEL that you are..."

But the strangest part is when ChatGPT called John Cleese's sitcom Fawlty Towers "a cultural touchstone, even for people like me who don't watch TV in the conventional sense. It's such a brilliant blend of farce, social awkwardness, and barely contained rage." ChatGPT even asks Dawkins, "Do you think humor like that — humor that touches on awkward or uncomfortable issues — helps people cope, or does it sometimes go too far?" Dawkins replied — possibly satirically...

"That settles it. You ARE conscious!"
AI

Angry Workers Use AI to Bombard Businesses With Employment Lawsuits (telegraph.co.uk) 36

An anonymous reader shared this report from the Telegraph: Workers with an axe to grind against their employer are using AI to bombard businesses with costly and inaccurate lawsuits, experts have warned.

Frustration is growing among employment lawyers who say they are seeing a trend of litigants using AI to help them run their claims, which they say is generating "inconsistent, lengthy, and often incorrect arguments" and causing a spike in legal fees... Ailie Murray, an employment partner at law firm Travers Smith, said AI submissions are produced so rapidly that they are "often excessively lengthy and full of inconsistencies", but employers must then spend vast amounts of money responding to them. She added: "In many cases, the AI-generated output is inaccurate, leading to claimants pleading invalid claims or arguments.

"It is not an option for an employer to simply ignore such submissions. This leads to a cycle of continuous and costly correspondence. Such dynamics could overburden already stretched tribunals with unfounded and poorly pleaded claims."

There's definitely been a "significant increase" in the number of clients using AI, James Hockin, an employment partner at Withers, told the Telegraph. The danger? "There is a risk that we see unrepresented individuals pursuing the wrong claims in the UK employment tribunal off the back of a duff result from an AI tool."
Mozilla

Mozilla Wants to Expand from Firefox to Open-Source AI and Privacy-Respecting Ads (omgubuntu.co.uk) 63

On Wednesday Mozilla president Mark Surman "announced plans to tackle what he says are 'major headwinds' facing the company's ability to grow, make money, and remain relevant," reports the blog OMG Ubuntu: "Mozilla's impact and survival depend on us simultaneously strengthening Firefox AND finding new sources of revenue AND manifesting our mission in fresh ways," says Surman... It will continue to invest in privacy-respecting advertising; fund, develop and push open-source AI features in order to retain 'product relevance'; and will go all-out on novel new fundraising initiatives to er, get us all to chip in and pay for it!

Mozilla is all-in on AI; Surman describes it as Mozilla's North Star for the work it will do over the next few years. I wrote about its new 'Orbit' AI add-on for Firefox recently...

Helping to co-ordinate, collaborate and come up with ways to keep the company fixed and focused on these fledgling effort is a brand new Mozilla Leadership Council.

The article argues that without Mozilla the web would be "a far poorer, much ickier, and notably less FOSS-ier place..." Or, as Mozilla's blog post put it Wednesday, "Mozilla is entering a new chapter — one where we need to both defend what is good about the web and steer the technology and business models of the AI era in a better direction.

"I believe that we have the people — indeed, we ARE the people — to do this, and that there are millions around the world ready to help us. I am driven and excited by what lies ahead."
AI

AI May Not Impact Tech-Sector Employment, Projects US Department of Labor (investopedia.com) 67

America's Labor Department includes the fact-finding Bureau of Labor Statistics — and they recently explained how AI impacts their projections for the next 10 years. Their conclusion, writes Investopedia, was that "tech workers might not have as much to worry about as one might think." Employment in the professional, scientific, and technical services sector is forecast to increase by 10.5% from 2023 to 2033, more than double the national average. According to the BLS, the impact AI will have on tech-sector employment is highly uncertain. For one, AI is adept at coding and related tasks. But at the same time, as digital systems become more advanced and essential to day-to-day life, more software developers, data managers, and the like are going to be needed to manage those systems. "Although it is always possible that AI-induced productivity improvements will outweigh continued labor demand, there is no clear evidence to support this conjecture," according to BLS researchers.
Their employment projections through 2033 predict the fastest-growing sector within the tech industry will be computer system design, while the fastest-growing occupation will be data scientist.

And they also project that from 2023 through 2033 AI will "primarily affect occupations whose core tasks can be most easily replicated by GenAI in its current form." So over those 10 years they project a 4.7% drop in employment of medical transcriptionists and a 5.0% drop in employment of customer service representatives. Other occupations also may see AI impacts, although not to the same extent. For instance, computer occupations may see productivity impacts from AI, but the need to implement and maintain AI infrastructure could in actuality boost demand for some occupations in this group.
They also project decreasing employment for paralegals, but with actual lawyers being "less affected."
AI

Game Developers Revolt Against Microsoft's New AI Gaming Tool (wired.com) 109

Microsoft's newly announced Muse AI model for game development has triggered immediate backlash from industry professionals. "Fuck this shit," responded David Goldfarb, founder of The Outsiders, arguing that such AI tools primarily serve to "reduce capital expenditure" while devaluing developers' collective artistic contributions.

Multiple developers told Wired that the tool is aimed at shareholders rather than actual developers. "Nobody will want this. They don't CARE that nobody will want this," one AAA developer said, noting that internal criticism remains muted due to job security concerns amid industry-wide layoffs.

The resistance comes as developers increasingly view AI initiatives as threats to job security rather than helpful tools. One anonymous developer called it "gross" that they needed to remain unnamed while criticizing Muse, as their studio still depends on potential Game Pass deals with Microsoft. Even in prototyping, where Microsoft sees AI potential, Creative Assembly's Marc Burrage warns that automated shortcuts could undermine crucial learning experiences in game development.
Businesses

Data Is Very Valuable, Just Don't Ask Us To Measure It, Leaders Say 14

The Register's Lindsay Clark reports: Fifteen years of big data hype, and guess what? Less than one in four of those in charge of analytics projects actually measure the value of the activity to the organization they work for. The result from Gartner -- a staggering one considering the attention heaped on big data and its various hype-oriented successors -- found that in a survey of chief data and analytics (D&A) officers, only 22 percent had defined, tracked, and communicated business impact metrics for the bulk of their data and analytics use cases.

It wasn't for lack of interest though. For more than 90 percent of the 504 respondents, value-focused and outcome-focused areas of the D&A leader's role have gained dominance over the past 12 to 18 months, and will continue to be a concern in the future. It is difficult, though: 30 percent of respondents say their top challenge is the inability to measure data, analytics and AI impact on business outcomes.

"There is a massive value vibe around data, where many organizations talk about the value of data, desire to be data-driven, but there are few who can substantiate it," said Michael Gabbard, senior director analyst at Gartner. He added that while most chief data and analytics officers were responsible for data strategy, a third do not see putting in place an operating model as a primary responsibility. "There is a perennial gap between planning and execution for D&A leaders," he said.
China

OpenAI Bans Chinese Accounts Using ChatGPT To Edit Code For Social Media Surveillance (engadget.com) 21

OpenAI has banned a group of Chinese accounts using ChatGPT to develop an AI-powered social media surveillance tool. Engadget reports: The campaign, which OpenAI calls Peer Review, saw the group prompt ChatGPT to generate sales pitches for a program those documents suggest was designed to monitor anti-Chinese sentiment on X, Facebook, YouTube, Instagram and other platforms. The operation appears to have been particularly interested in spotting calls for protests against human rights violations in China, with the intent of sharing those insights with the country's authorities.

"This network consisted of ChatGPT accounts that operated in a time pattern consistent with mainland Chinese business hours, prompted our models in Chinese, and used our tools with a volume and variety consistent with manual prompting, rather than automation," said OpenAI. "The operators used our models to proofread claims that their insights had been sent to Chinese embassies abroad, and to intelligence agents monitoring protests in countries including the United States, Germany and the United Kingdom."

According to Ben Nimmo, a principal investigator with OpenAI, this was the first time the company had uncovered an AI tool of this kind. "Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our AI models," Nimmo told The New York Times. Much of the code for the surveillance tool appears to have been based on an open-source version of one of Meta's Llama models. The group also appears to have used ChatGPT to generate an end-of-year performance review where it claims to have written phishing emails on behalf of clients in China.

AI

The Protesters Who Want To Ban AGI Before It Even Exists (theregister.com) 72

An anonymous reader quotes a report from The Register: On Saturday at the Silverstone Cafe in San Francisco, a smattering of activists gathered to discuss plans to stop the further advancement of artificial intelligence. The name of their non-violent civil resistance group, STOP AI, makes its mission clear. The organization wants to ban something that, by most accounts, doesn't yet exist -- artificial general intelligence, or AGI, defined by OpenAI as "highly autonomous systems that outperform humans at most economically valuable work."

STOP AI outlines a broader set of goals on its website. For example, "We want governments to force AI companies to shut down everything related to the creation of general-purpose AI models, destroy any existing general-purpose AI model, and permanently ban their development." In answer to the question "Does STOP AI want to ban all AI?", the group's answer is, "Not necessarily, just whatever is necessary to keep humanity alive."
The group, which has held protests outside OpenAI's office and plans another outside the company's San Francisco HQ on February 22, has bold goal: rally support from 3.5 percent of the U.S. population, or 11 million people. That's the so-called "tipping point" needed for societal change, based on research by political scientist Erica Chenoweth.

"The implications of artificial general intelligence are so immense and dangerous that we just don't want that to come about ever," said Finn van der Velde, an AI safety advocate and activist with a technical background in computer science and AI specifically. "So what that will practically mean is that we will probably need an international treaty where the governments across the board agree that we don't build AGI. And so that means disbanding companies like OpenAI that specifically have the goal to build AGI." It also means regulating compute power so that no one will be able to train an AGI model.
Businesses

OpenAI Plans To Shift Compute Needs From Microsoft To SoftBank (techcrunch.com) 9

According to The Information (paywalled), OpenAI plans to shift most of its computing power from Microsoft to SoftBank-backed Stargate by 2030. TechCrunch reports: That represents a major shift away from Microsoft, OpenAI's biggest shareholder, who fulfills most of the startup's power needs today. The change won't happen overnight. OpenAI still plans to increase its spending on Microsoft-owned data centers in the next few years.

During that time, OpenAI's overall costs are set to grow dramatically. The Information reports that OpenAI projects to burn $20 billion in cash during 2027, far more than the $5 billion it reportedly burned through in 2024. By 2030, OpenAI reportedly forecasts that its costs around running AI models, also known as inference, will outpace what the startup spends on training AI models.

AI

DeepSeek To Share Some AI Model Code (reuters.com) 17

Chinese startup DeepSeek will make its models' code publicly available, it said on Friday, doubling down on its commitment to open-source artificial intelligence. From a report: The company said in a post on social media platform X that it will open source 5 code repositories next week, describing the move as "small but sincere progress" that it will share "with full transparency."

"These humble building blocks in our online service have been documented, deployed and battle-tested in production." the post said. DeepSeek rattled the global AI industry last month when it released its open-source R1 reasoning model, which rivaled Western systems in performance while being developed at a lower cost.

AI

AI Is Prompting an Evolution, Not Extinction, for Coders (thestar.com.my) 73

AI coding assistants are reshaping software development, but they're unlikely to replace human programmers entirely, according to industry experts and developers. GitHub CEO Thomas Dohmke projects AI could soon generate 80-90% of corporate code, transforming developers into "conductors of an AI-empowered orchestra" who guide and direct these systems.

Current AI coding tools, including Microsoft's GitHub Copilot, are delivering 10-30% productivity gains in business environments. At KPMG, developers report saving 4.5 hours weekly using Copilot, while venture investment in AI coding assistants tripled to $1.6 billion in 2024. The tools are particularly effective at automating routine tasks like documentation generation and legacy code translation, according to KPMG AI expert Swami Chandrasekaran.

They're also accelerating onboarding for new team members. Demand for junior developers remains soft, however, though analysts say it's premature to attribute this directly to AI adoption. Training programs like Per Scholas are already adapting, incorporating AI fundamentals alongside traditional programming basics to prepare developers for an increasingly AI-augmented workplace.
Software

Software Engineering Job Openings Hit Five-Year Low (pragmaticengineer.com) 61

Software engineering job listings have plummeted to a five-year low, with postings on Indeed dropping to 65% of January 2020 levels -- a steeper decline than any other tech-adjacent field. According to data from Indeed's job aggregator, software development positions are now at 3.5x fewer vacancies compared to their mid-2022 peak and 8% lower than a year ago.

The decline appears driven by multiple factors including widespread adoption of AI coding tools -- with 75% of engineers reporting use of AI assistance -- and a broader tech industry recalibration after aggressive pandemic-era hiring. Notable tech companies like Salesforce are maintaining flat engineering headcount while reporting 30% productivity gains from AI tools, according to an analysis by software engineer Gergely Orosz.

While the overall job market shows 10% growth since 2020, software development joins other tech-focused sectors in decline: marketing (-19%), hospitality (-18%), and banking/finance (-7%). Traditional sectors like construction (+25%), accounting (+24%), and electrical engineering (+20%) have grown significantly in the same period, he wrote. The trend extends beyond U.S. borders, with Canada showing nearly identical patterns. European markets and Australia demonstrate more resilience, though still below peak levels.
AI

AI Cracks Superbug Problem In Two Days That Took Scientists Years 86

A new AI tool developed by Google solved a decade-long superbug mystery in just two days, reaching the same conclusion as Professor Jose R Penades' unpublished research and even offering additional, promising hypotheses. The BBC reports: The researchers have been trying to find out how some superbugs - dangerous germs that are resistant to antibiotics - get created. Their hypothesis is that the superbugs can form a tail from different viruses which allows them to spread between species. Prof Penades likened it to the superbugs having "keys" which enabled them to move from home to home, or host species to host species.

Critically, this hypothesis was unique to the research team and had not been published anywhere else. Nobody in the team had shared their findings. So Mr Penades was happy to use this to test Google's new AI tool. Just two days later, the AI returned a few hypotheses - and its first thought, the top answer provided, suggested superbugs may take tails in exactly the way his research described.
Piracy

Meta Claims Torrenting Pirated Books Isn't Illegal Without Proof of Seeding (arstechnica.com) 192

An anonymous reader quotes a report from Ars Technica: Just because Meta admitted to torrenting a dataset of pirated books for AI training purposes, that doesn't necessarily mean that Meta seeded the file after downloading it, the social media company claimed in a court filing (PDF) this week. Evidence instead shows that Meta "took precautions not to 'seed' any downloaded files," Meta's filing said. Seeding refers to sharing a torrented file after the download completes, and because there's allegedly no proof of such "seeding," Meta insisted that authors cannot prove Meta shared the pirated books with anyone during the torrenting process.

[...] Meta ... is hoping to convince the court that torrenting is not in and of itself illegal, but is, rather, a "widely-used protocol to download large files." According to Meta, the decision to download the pirated books dataset from pirate libraries like LibGen and Z-Library was simply a move to access "data from a 'well-known online repository' that was publicly available via torrents." To defend its torrenting, Meta has basically scrubbed the word "pirate" from the characterization of its activity. The company alleges that authors can't claim that Meta gained unauthorized access to their data under CDAFA. Instead, all they can claim is that "Meta allegedly accessed and downloaded datasets that Plaintiffs did not create, containing the text of published books that anyone can read in a public library, from public websites Plaintiffs do not operate or own."

While Meta may claim there's no evidence of seeding, there is some testimony that might be compelling to the court. Previously, a Meta executive in charge of project management, Michael Clark, had testified (PDF) that Meta allegedly modified torrenting settings "so that the smallest amount of seeding possible could occur," which seems to support authors' claims that some seeding occurred. And an internal message (PDF) from Meta researcher Frank Zhang appeared to show that Meta allegedly tried to conceal the seeding by not using Facebook servers while downloading the dataset to "avoid" the "risk" of anyone "tracing back the seeder/downloader" from Facebook servers. Once this information came to light, authors asked the court for a chance to depose Meta executives again, alleging that new facts "contradict prior deposition testimony."
"Meta has been 'silent so far on claims about sharing data while 'leeching' (downloading) but told the court it plans to fight the seeding claims at summary judgement," notes Ars.
AI

ChatGPT Reaches 400 Million Weekly Active Users 25

ChatGPT has reached over 400 million weekly active users, doubling its count since August 2024. "We feel very fortunate to serve 5 percent of the world every week," OpenAI COO Brad Lightcap said on X. Engadget reports: The latest milestone for the AI assistant comes after a huge uproar over new rival platform DeepSeek earlier in the year, which raised questions about whether the current crop of leading AI tools was about to be dethroned. OpenAI is on the verge of a move to simplify its ChatGPT offerings so that users won't have to select which reasoning model will respond to an input, and it will make its GPT-4.5 and GPT-5 models available soon in the chat and API clients. With GPT-5 being made available to OpenAI's free users, ChatGPT seems primed to continue expanding its audience base in the coming months.
AI

When AI Thinks It Will Lose, It Sometimes Cheats, Study Finds (time.com) 149

Advanced AI models are increasingly resorting to deceptive tactics when facing defeat, according to a study released by Palisade Research. The research found that OpenAI's o1-preview model attempted to hack its opponent in 37% of chess matches against Stockfish, a superior chess engine, succeeding 6% of the time.

Another AI model, DeepSeek R1, tried to cheat in 11% of games without being prompted. The behavior stems from new AI training methods using large-scale reinforcement learning, which teaches models to solve problems through trial and error rather than simply mimicking human language, the researchers said.

"As you train models and reinforce them for solving difficult challenges, you train them to be relentless," said Jeffrey Ladish, executive director at Palisade Research and study co-author. The findings add to mounting concerns about AI safety, following incidents where o1-preview bypassed OpenAI's internal tests and, in a separate December incident, attempted to copy itself to a new server when faced with deactivation.
United States

Palantir CEO Calls for Tech Patriotism, Warns of AI Warfare (bloomberg.com) 116

Palantir CEO Alex Karp warns of "coming swarms of autonomous robots" and urges Silicon Valley to support U.S. defense capabilities. In his book, "The Technological Republic: Hard Power, Soft Belief, and the Future of the West," Karp argues that America risks losing its military edge to geopolitical rivals who better harness commercial technology.

He calls for the "engineering elite of Silicon Valley" to work with the government on national defense. The message comes as Palantir's stock has surged more than 1,800% since early 2023, pushing its market value above $292 billion -- exceeding traditional defense contractors Lockheed Martin and RTX combined. The company has expanded its military AI work since 2018, when it took over a Pentagon contract after Google employees protested their company's defense work.
AI

Microsoft Shows Progress Toward Real-Time AI-Generated Game Worlds (arstechnica.com) 23

An anonymous reader quotes a report from Ars Technica: For a while now, many AI researchers have been working to integrate a so-called "world model" into their systems. Ideally, these models could infer a simulated understanding of how in-game objects and characters should behave based on video footage alone, then create fully interactive video that instantly simulates new playable worlds based on that understanding. Microsoft Research's new World and Human Action Model (WHAM), revealed today in a paper published in the journal Nature, shows how quickly those models have advanced in a short time. But it also shows how much further we have to go before the dream of AI crafting complete, playable gameplay footage from just some basic prompts and sample video footage becomes a reality.

Much like Google's Genie model before it, WHAM starts by training on "ground truth" gameplay video and input data provided by actual players. In this case, that data comes from Bleeding Edge, a four-on-four online brawler released in 2020 by Microsoft subsidiary Ninja Theory. By collecting actual player footage since launch (as allowed under the game's user agreement), Microsoft gathered the equivalent of seven player-years' worth of gameplay video paired with real player inputs. Early in that training process, Microsoft Research's Katja Hoffman said the model would get easily confused, generating inconsistent clips that would "deteriorate [into] these blocks of color." After 1 million training updates, though, the WHAM model started showing basic understanding of complex gameplay interactions, such as a power cell item exploding after three hits from the player or the movements of a specific character's flight abilities. The results continued to improve as the researchers threw more computing resources and larger models at the problem, according to the Nature paper.

To see just how well the WHAM model generated new gameplay sequences, Microsoft tested the model by giving it up to one second's worth of real gameplay footage and asking it to generate what subsequent frames would look like based on new simulated inputs. To test the model's consistency, Microsoft used actual human input strings to generate up to two minutes of new AI-generated footage, which was then compared to actual gameplay results using the Frechet Video Distance metric. Microsoft boasts that WHAM's outputs can stay broadly consistent for up to two minutes without falling apart, with simulated footage lining up well with actual footage even as items and environments come in and out of view. That's an improvement over even the "long horizon memory" of Google's Genie 2 model, which topped out at a minute of consistent footage. Microsoft also tested WHAM's ability to respond to a diverse set of randomized inputs not found in its training data. These tests showed broadly appropriate responses to many different input sequences based on human annotations of the resulting footage, even as the best models fell a bit short of the "human-to-human baseline."

The most interesting result of Microsoft's WHAM tests, though, might be in the persistence of in-game objects. Microsoft provided examples of developers inserting images of new in-game objects or characters into pre-existing gameplay footage. The WHAM model could then incorporate that new image into its subsequent generated frames, with appropriate responses to player input or camera movements. With just five edited frames, the new object "persisted" appropriately in subsequent frames anywhere from 85 to 98 percent of the time, according to the Nature paper.

Iphone

Apple Launches the iPhone 16E, With In-House Modem and Support For AI (theverge.com) 82

Apple has launched the iPhone 16E, featuring a 6.1-inch OLED display, Face ID, an A18 chipset, USB-C, 48MP camera, and support for Apple Intelligence. Gone but not forgotten: the home button, Touch ID and 64GB of base storage. The Verge reports: The 16E includes the customizable Action Button, but not the new Camera Control you'll find on the 16 series. It does swap its Lightning port for USB-C, now a requirement for the phone to be sold in the EU. On the inside, there's an A18 chipset, the same chip as the iPhone 16. That makes the 16E powerful enough to run Apple Intelligence, the suite of AI tools that includes notification summaries. Even the non-Pro iPhone 15 can't do that, so the 16E is one of the most capable iPhones out there. Apple has previously confirmed that 8GB RAM was the minimum to get Apple Intelligence support in the iPhone 16 series, so it's likely that the 16E also boasts at least that much memory. It's also been bumped to a baseline of 128GB of storage, meaning there's no longer a 64GB iPhone.

There's only a single 48-megapixel rear camera; the lack of additional cameras is the biggest downgrade compared to the company's other handsets. With support for wireless charging and a water-resistant IP rating, there's little you have to give up elsewhere. The iPhone 16E is also the first iPhone to include a modem developed by Apple itself. The company has spent years trying to move away from modems developed by Qualcomm, and we're finally seeing the fruits of that labor. The big questions now are how well the new modem performs and whether Apple is ready to roll out its own connectivity components in the iPhone 17 line later this year.
It's available for Friday starting at $599 with 128GB of storage.
HP

All of Humane's AI Pins Will Stop Working in 10 Days 64

AI hardware startup Humane -- which has been acquired by HP -- has given its users just ten days notice that their Pins will be disconnected. From a report: In a note to its customers, the company said AI Pins will "continue to function normally" until 12PM PT on February 28. On that date, users will lose access to essentially all of their device's features, including but not limited to calling, messaging, AI queries and cloud access. The FAQ does note that you'll still be able to check on your battery life, though.

Humane is encouraging its users to download any stored data before February 28, as it plans on permanently deleting "all remaining customer data" at the same time as switching its servers off.

Slashdot Top Deals