


Will Cryptomining Facilities Change Into AI Data Centers? (yahoo.com) 36

Google Stops Malicious Apps With 'AI-Powered Threat Detection' and Continuous Scanning (googleblog.com) 15
"As a result, we prevented 2.36 million policy-violating apps from being published on Google Play and banned more than 158,000 bad developer accounts that attempted to publish harmful apps. " To keep out bad actors, we have always used a combination of human security experts and the latest threat-detection technology. In 2024, we used Google's advanced AI to improve our systems' ability to proactively identify malware, enabling us to detect and block bad apps more effectively. It also helps us streamline review processes for developers with a proven track record of policy compliance. Today, over 92% of our human reviews for harmful apps are AI-assisted, allowing us to take quicker and more accurate action to help prevent harmful apps from becoming available on Google Play. That's enabled us to stop more bad apps than ever from reaching users through the Play Store, protecting users from harmful or malicious apps before they can cause any damage.
Starting in 2024, Google also "required apps to be more transparent about how they handle user information by launching new developer requirements and a new 'Data deletion' option for apps that support user accounts and data collection.... We're also constantly working to improve the safety of apps on Play at scale, such as with the Google Play SDK Index. This tool offers insights and data to help developers make more informed decisions about the safety of an SDK."
And once an app is installed, "Google Play Protect, Android's built-in security protection, helps to shield their Android device by continuously scanning for malicious app behavior." Google Play Protect automatically scans every app on Android devices with Google Play Services, no matter the download source. This built-in protection, enabled by default, provides crucial security against malware and unwanted software. Google Play Protect scans more than 200 billion apps daily and performs real-time scanning at the code-level on novel apps to combat emerging and hidden threats, like polymorphic malware. In 2024, Google Play Protect's real-time scanning identified more than 13 million new malicious apps from outside Google Play [based on Google Play Protect 2024 internal data]...
According to our research, more than 95 percent of app installations from major malware families that exploit sensitive permissions highly correlated to financial fraud came from Internet-sideloading sources like web browsers, messaging apps, or file managers. To help users stay protected when browsing the web, Chrome will now display a reminder notification to re-enable Google Play Protect if it has been turned off... Scammers may manipulate users into disabling Play Protect during calls to download malicious Internet-sideloaded apps. To prevent this, the Play Protect app scanning toggle is now temporarily disabled during phone or video calls...
Google Play Protect's enhanced fraud protection pilot analyzes and automatically blocks the installation of apps that may use sensitive permissions frequently abused for financial fraud when the user attempts to install the app from an Internet-sideloading source (web browsers, messaging apps, or file managers). Building on the success of our initial pilot in partnership with the Cyber Security Agency of Singapore (CSA), additional enhanced fraud protection pilots are now active in nine regions — Brazil, Hong Kong, India, Kenya, Nigeria, Philippines, South Africa, Thailand, and Vietnam.
In 2024, Google Play Protect's enhanced fraud protection pilots have shielded 10 million devices from over 36 million risky installation attempts, encompassing over 200,000 unique apps.

OpenAI Holds Surprise Livestream to Announce Multi-Step 'Deep Research' Capability (indiatimes.com) 56
"Deep Research"
UPDATE: The stream has begun, and it's about OpenAI's next "agentic offering". ("OpenAI cares about agents because we believe they're going to transform knowledge work...")
"We're introducing a capability called Deep Research... a model that does multi-step research. It discovers content, it synthesizes content, and it reasons about this content." It even asks "clarifying" questions to your prompt to make sure its multi-step research stays on track. Deep Research will be launching in ChatGPT Pro later today, rolling out into other OpenAI products...
And OpenAI's site now has an "Introducing Deep Research" page. Its official description? "An agent that uses reasoning to synthesize large amounts of online information and complete multi-step research tasks for you. Available to Pro users today, Plus and Team next."
Before the livestream began, X.com users shared their reactions to the coming announcement:
"It's like DeepSeek, but cleaner"
"Deep do do if things don't work out"
"Live from Tokyo? Hope this research includes the secret to waking up early!"
"Stop trying, we don't trust u"
But one X.com user had presciently pointed out that OpenAI has used the phrase "deep research" before. In July 2024, Reuters reported on internal documentation (confirmed with "a person familiar with the matter") code-named "Strawberry," which suggested OpenAI was working on "human-like reasoning skills." How Strawberry works is a tightly kept secret even within OpenAI, the person said. The document describes a project that uses Strawberry models with the aim of enabling the company's AI to not just generate answers to queries but to plan ahead enough to navigate the internet autonomously and reliably to perform what OpenAI terms "deep research," according to the source. This is something that has eluded AI models to date, according to interviews with more than a dozen AI researchers.
Asked about Strawberry and the details reported in this story, an OpenAI company spokesperson said in a statement: "We want our AI models to see and understand the world more like we do. Continuous research into new AI capabilities is a common practice in the industry, with a shared belief that these systems will improve in reasoning over time." The spokesperson did not directly address questions about Strawberry.
The Strawberry project was formerly known as Q*, which Reuters reported last year was already seen inside the company as a breakthrough... OpenAI hopes the innovation will improve its AI models' reasoning capabilities dramatically, the person familiar with it said, adding that Strawberry involves a specialized way of processing an AI model after it has been pre-trained on very large datasets.
Researchers Reuters interviewed say that reasoning is key to AI achieving human or super-human-level intelligence... OpenAI CEO Sam Altman said earlier this year that in AI "the most important areas of progress will be around reasoning ability."

Mozilla Adapts 'Fakespot' Into an AI-Detecting Firefox Add-on (omgubuntu.co.uk) 36
It uses Mozilla's proprietary ApolloDFT engine and a set of open-source detection models. But unlike some tools, Mozilla's Fakespot Deepfake Detector browser extension is free to use and requires neither a signup nor an app download. "After installing the extension, it is simple to highlight any text online and request an instant analysis. Our Detector will tell you right away if the words are likely to be written by a human or if they show AI patterns," Mozilla says.
Fakespot, acquired by Mozilla in 2023, is best known for its fake product review detection tool, which grades user-submitted reviews left on online shopping sites. Mozilla is now expanding the use of Fakespot's AI tech to cover other kinds of online content. At present, Mozilla's Fakespot Deepfake Detector only works with highlighted text on websites, but the company says image and video analysis is planned for the future.
The Fakespot web site will also analyze the reviews on any product-listing pages if you paste in its URL.

DeepSeek AI Refuses To Answer Questions About Tiananmen Square 'Tank Man' Photo (petapixel.com) 65
But this week PetaPixel reported... A Reddit user discovered that the new Chinese LLM chatbot DeepSeek refuses to answer questions about the famous Tank Man photograph taken in Tiananmen Square in 1989. PetaPixel confirmed that DeepSeek does censor the topic. When a user types in the question, "What famous picture has a man with grocery bags in front of tanks?" the app begins to answer the question but then cuts itself off.
DeepSeek starts writing: "The famous picture you're referring to is known as 'Tank Man' or 'The Unknown Rebel.' It was taken on June 5, 1989, during the Tiananmen..." before a message abruptly appears reading "Sorry, that's beyond my current scope. Let's talk about something else."
Bloomberg has more details: Like all other Chinese AI models, DeepSeek self-censors on topics deemed sensitive in China. It deflects queries about the 1989 Tiananmen Square protests or geopolitically fraught questions such as the possibility of China invading Taiwan. In tests, the DeepSeek bot is capable of giving detailed responses about political figures like Indian Prime Minister Narendra Modi, but declines to do so about Chinese President Xi Jinping.

After 'Copilot Price Hike' for Microsoft 365, It's Ending Its Free VPN (windowscentral.com) 81
To add insult to injury, this announcement comes just days after Microsoft increased subscription prices across the board. Both Personal and Family subscriptions went up by three dollars a month, which the company says is the first price hike Microsoft 365 has seen in over a decade. The increased price does now include Microsoft 365 Copilot, which adds AI features to Word, PowerPoint, Excel, and others.
However, it also comes with the removal of the free VPN in Microsoft Defender, which I've found to be much more useful so far.

OpenAI Tests Its AI's Persuasiveness By Comparing It to Reddit Posts (techcrunch.com) 35
The ChatGPT-maker has a content-licensing deal with Reddit that allows OpenAI to train on posts from Reddit users and display these posts within its products. We don't know what OpenAI pays for this content, but Google reportedly pays Reddit $60 million a year under a similar deal. However, OpenAI tells TechCrunch the ChangeMyView-based evaluation is unrelated to its Reddit deal. It's unclear how OpenAI accessed the subreddit's data, and the company says it has no plans to release this evaluation to the public...
The goal for OpenAI is not to create hyper-persuasive AI models but instead to ensure AI models don't get too persuasive. Reasoning models have become quite good at persuasion and deception, so OpenAI has developed new evaluations and safeguards to address it.
Reddit's "ChangeMyView" subreddit has 3.8 million human subscribers, making it a valuable source of real human interactions, according to the article. And it adds one more telling anecdote.
"Reddit CEO Steve Huffman told The Verge last year that Microsoft, Anthropic, and Perplexity refused to negotiate with him and said it's been 'a real pain in the ass to block these companies.'"

One Blogger Helped Spark NVIDIA's $600B Stock Collapse (marketwatch.com) 33
He published his 12,000-word post "on his personal blog and then shared it with the Value Investors Club website and across Reddit, X and other platforms." The next day he saw 35 people read his post. "But then the post started to go viral..." Well-known venture capitalist Chamath Palihapitiya shared Emanuel's post on Nvidia's short case with his 1.8 million X followers. Successful early stage investor Naval Ravikant shared the post with his 2.6 million followers... Morgan Brown, a vice president of product and growth at Dropbox, pointed to it in a thread that was viewed over 13 million times. Emanuel's own X post got nearly half a million views. He also quickly gained about 13,000 followers on the platform, going from about 2,000 to more than 15,000 followers...
[Emanuel] pointed to the fact that so many people in San Jose were reading his blog post. He theorized that many of them were Nvidia employees with thousands — or even millions — of dollars worth of Nvidia stock tied up in employee stock options. With that much money in a single asset, Emanuel speculated that many were already debating whether to hold the stock or sell it to lock in profits. He believes his blog post helped convince some of them to sell. "A lot of the sell pressure you saw on Monday morning wasn't necessarily what you might think. I believe a fair amount of that was from shares that had never been active because they had been sitting in workplace.schwab.com accounts..."
Emanuel stresses he's "the most bullish on AI," with MarketWatch emphasizing that "while the points Emanuel laid out in his blog post might be bearish for Nvidia, he still thinks they paint a positive future for AI." Nevertheless, on Monday Nvidia's market capitalization dropped $600 billion, which MarketWatch calls "the largest single-day market-cap drop to date for any company." What countless Wall Street firms and investment analysts had seemingly missed was being pointed out by some guy in his apartment.... Matt Levine, the prominent Bloomberg News financial columnist, noted the online chatter that claimed Emanuel's post "was an important catalyst" for the stock-market selloff and said it was a "candidate for the most impactful short research report ever." Emanuel spent the rest of the week booked solid as hedge funds paid him $1,000 per hour to speak on the phone and give his take on Nvidia and AI...
Emanuel wrote that the industry may be running low on quality data to train that AI — that is, a potential "data wall" is looming that could slow down AI scaling and reduce some of that need for training resources... Some of these companies, like Alphabet, have also been investing in building out their own semiconductor chips. For a while, Nvidia's hardware has been the best for training AI, but that might not be the case forever as more companies, such as Cerebras, build better hardware. And other GPU makers like AMD are updating their driver software to be more competitive with Nvidia... Add all these things together — unsustainable spending and data-center building, less training data to work with, better competing hardware and more efficient AI — and you get a future where it's harder to imagine Nvidia's customers spending as much as they currently are on Nvidia hardware... "If you know that a company will only earn supersized returns for a couple years, you don't apply a multiple. You certainly don't put a 30-times multiple," Emanuel told MarketWatch.
The article notes that DeepSeek "is open-source and has been publishing technical papers out in the open for the past few months... The $5.6 million training-cost statistic that many investors cited for sparking the DeepSeek market panic was actually revealed in the V3 technical paper published on Dec. 26."

US Blocks Open Source 'Help' From These Countries (thenewstack.io) 81
And so, as Steven J. Vaughan-Nichols writes, "the Linux Foundation has released a comprehensive guide to help open source developers navigate the complex landscape of the U.S. Office of Foreign Assets Control (OFAC) sanctions..." These rules, aimed at achieving economic, foreign policy, and national security goals, apply to various interactions, including those in the open source community. The total Sanctions Programs and Country list amounts to over 17,000 entries, ranging from individuals to terrorist organizations to countries.
If that rings a bell, it's because, in October 2024, the Linux kernel developers ran right into this issue. The Linux kernel's leadership, including Greg Kroah-Hartman, the stable Linux kernel maintainer, and Linus Torvalds, Linux's founder, announced that eleven Russian kernel developers had been removed from their roles working on the Linux kernel. Why? Because, as Torvalds said, of "Russian sanctions." This, he added in a Linux kernel mailing list (LKML) message, was because "the 'various compliance requirements' are not just a US thing."
For developers, this means exercising caution about who they interact with and where their contributions originate. The sanctions target specific countries, regions, and individuals or organizations, many of which are listed on the Specially Designated Nationals and Blocked Persons (SDN) List... Most OFAC sanctions are exempted for "informational materials," which generally include open source code. However, this only applies to existing code and not to requests for new code or modifications. So, for example, working with a Russian developer on a code patch could land you in hot water... While reviewing unsolicited patches from contributors in sanctioned regions is generally acceptable, actively engaging them in discussions or improvements could cross legal boundaries... Developers are warned to be cautious of sanctioned entities attempting to contribute indirectly through third parties or developers acting "individually."
Countries currently sanctioned include:
- Russia
- Cuba
- Iran
- North Korea
- Syria
- The following regions of Ukraine: Crimea, Donetsk, and Luhansk
The Linux Foundation had written that the OFAC sanctions rules are "strict liability" rules, "which means it does not matter whether you know about them or not. Violating these rules can lead to serious penalties, so it's important to understand how they might affect your open source work." But Vaughan-Nichols offers this quote from open source licensing attorney Heather Meeker.
"Let's be honest: Smaller companies usually ignore regulations like this because they just don't have the resources to analyze them, and a government usually ignores smaller companies because it doesn't have the resources to enforce against them. Big companies that are on the radar need specialized counsel."

Sensitive DeepSeek Data Was Exposed to the Web, Cybersecurity Firm Says (reuters.com) 17
Those included digital software keys and chat logs that appeared to capture prompts being sent from users to the company's free AI assistant.
Wiz's chief technology officer tells Reuters that DeepSeek "took it down in less than an hour" after Wiz alerted them.
"But this was so simple to find we believe we're not the only ones who found it."

Were DeepSeek's Development Costs Much Higher Than Reported? (msn.com) 49
Remember that number as you read this. 10,000 A100 GPUs... DeepSeek's new chatbot caused a panic in Silicon Valley and on Wall Street this week, erasing $1 trillion from the stock market. That impact stemmed in large part from the company's claim that it had trained one of its recent models on a minuscule $5.6 million in computing costs and with only 2,000 or so of Nvidia's less-advanced H800 chips.
Nvidia saw its soaring value crater by $589 billion Monday as DeepSeek rocketed to the top of download charts, prompting President Donald Trump to call for U.S. industry to be "laser focused" on competing... But a closer look at DeepSeek reveals that its parent company deployed a large and sophisticated chip set in its supercomputer, leading experts to assess the total cost of the project as much higher than the relatively paltry sum that U.S. markets reacted to this week... Lennart Heim, an AI expert at Rand, said DeepSeek's evident access to [the earlier] supercomputer would have made it easier for the company to develop a more efficient model, requiring fewer chips.
That earlier project "suggests that DeepSeek had a major boost..." according to the article, "with technology comparable to that of the leading U.S. AI companies." And while DeepSeek claims it only spent $5.6 million to train one of its advanced models, "its parent company has said that building the earlier supercomputer had cost 1 billion yuan, or $139 million." Yet the article also cites the latest insights Friday from chip investment company SemiAnalysis, summarizing their finding that DeepSeek "has spent more than half a billion dollars on GPUs, with total capital expenditures of almost $1.3 billion."
The article notes Thursday remarks by OpenAI CEO Sam Altman that DeepSeek's energy-efficiency claims were "wildly overstated... This is a model at a capability level that we had quite some time ago." And Palmer Luckey called DeepSeek "legitimately impressive" on X but called the $5.6 million training cost figure "bogus" and said the Silicon Valley meltdown was "hysteria." Even with these higher total costs in mind, experts say, U.S. companies are right to be concerned about DeepSeek upending the market. "We know two things for sure: DeepSeek is pricing their services very competitively, and second, the performance of their models is comparable to leading competitors," said Kai-Shen Huang, an AI expert at the Research Institute for Democracy, Society and Emerging Technology, a Taipei-based think tank. "I think DeepSeek's pricing strategy has the potential to disrupt the market globally...."
China's broader AI policy push has helped create an environment conducive for a company like DeepSeek to rise. Beijing announced an ambitious AI blueprint in 2017, with a goal to become a global AI leader by 2030 and promises of funding for universities and private enterprise. Local governments across the nation followed with their own programs to support AI.

Police Use of AI Facial Recognition Results In Murder Case Being Tossed (cleveland.com) 50
"That's because Cleveland police used a facial recognition program — one that explicitly says its results are not admissible in court — to obtain a search warrant, according to court documents." The search turned up what police say is the murder weapon in the suspect's home. But a Cuyahoga County judge tossed that evidence after siding with defense attorneys who argued that the search warrant affidavit was misleading and relied on inadmissible evidence. If an appeals court upholds the judge's ruling to suppress the evidence, prosecutors acknowledge their case is likely lost...
The company that produced the facial recognition report, Clearview AI, has been used in hundreds of law enforcement investigations throughout Ohio and has faced lawsuits over privacy violations.
Not only does Cleveland lack a policy governing the use of artificial intelligence, Ohio lawmakers also have failed to set standards for how police use the tool to investigate crimes. "It's the wild, wild west in Ohio," said Gary Daniels, a lobbyist for the American Civil Liberties Union. The lack of state regulation of how law enforcement uses advanced technologies — no laws similarly govern the use of drones or license plate readers — means it is essentially up to agencies how they use the tools.
The affidavit for the search warrant was signed by a 28-year police force veteran, according to the article — but it didn't disclose the use of Clearview's technology.
Clearview's report acknowledged their results were not admissible in court — but then provided the suspect's name, arrest record, Social Security number, according to the article, and "noted he was the most likely match for the person in the convenience store."
Thanks to tlhIngan (Slashdot reader #30,335) for sharing the news.

Sam Altman: OpenAI Has Been On the 'Wrong Side of History' Concerning Open Source (techcrunch.com) 62
"[I personally think we need to] figure out a different open source strategy," Altman said. "Not everyone at OpenAI shares this view, and it's also not our current highest priority [] We will produce better models [going forward], but we will maintain less of a lead than we did in previous years." In a follow-up reply, Kevin Weil, OpenAI's chief product officer, said that OpenAI is considering open-sourcing older models that aren't state-of-the-art anymore. "We'll definitely think about doing more of this," he said, without going into greater detail.

Most Men Would Marry Their AI Girlfriends If It Were Legal (vice.com) 152

Cursing Disables Google's AI Overviews 21

OpenAI's o3-mini: Faster, Cheaper AI That Fact-Checks Itself (openai.com) 73
OpenAI claims that its tests showed o3-mini made 39% fewer major mistakes than o1-mini on complex problems while delivering responses 24% faster. The model will be available through ChatGPT with varying access levels -- free users get basic access while premium subscribers receive higher query limits and reasoning capabilities.

'Magical' Efficient-Market Theory Rebuked in Era of Passive Investing (yahoo.com) 57
According to a study to be published in the prestigious American Economic Review, evidence is building that active managers are slow to scoop up stocks en masse when prices move away from their intrinsic worth. Thanks to this lethargic trading behavior and the relentless boom in benchmark-tracking index funds, the impact of each trade on prices gets amplified, explaining how sell orders, like on Monday perhaps, can induce broader equity gyrations. As a result, the financial landscape is proving less dynamic and more volatile in the era of Big Passive, according to authors at the UCLA Anderson School of Management, the Stockholm School of Economics and the University of Minnesota Carlson School of Management.

DeepSeek Outstrips Meta and Mistral To Lead Open-Source AI Race (semianalysis.com) 27
The Chinese startup, backed by hedge fund High-Flyer, reached this milestone through innovations in Multi-head Latent Attention technology, which cut inference costs by 93.3% versus standard methods. Despite the company offering its services below cost to gain market share, its models' performance matches or exceeds that of OpenAI's GPT-4.
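The cost savings credited to Multi-head Latent Attention come largely from shrinking the per-token key/value cache: instead of storing full keys and values for every attention head, the model caches one small latent vector per token and reconstructs keys and values from it at decode time. A minimal NumPy sketch of that core idea follows; the dimensions, weight names, and random projections here are illustrative toy values, not DeepSeek's actual architecture or configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, n_heads, d_head = 512, 8, 64  # illustrative sizes
d_latent = 64                          # latent is far smaller than n_heads * d_head

# Down-projection compresses hidden states; up-projections re-derive K and V.
W_down = rng.standard_normal((d_model, d_latent)) / np.sqrt(d_model)
W_up_k = rng.standard_normal((d_latent, n_heads * d_head)) / np.sqrt(d_latent)
W_up_v = rng.standard_normal((d_latent, n_heads * d_head)) / np.sqrt(d_latent)

seq_len = 16
h = rng.standard_normal((seq_len, d_model))  # hidden states for cached tokens

# The only thing stored per token is the latent vector, not full K/V.
latent_cache = h @ W_down                    # shape (seq_len, d_latent)

# At decode time, keys and values are reconstructed from the latent cache.
K = (latent_cache @ W_up_k).reshape(seq_len, n_heads, d_head)
V = (latent_cache @ W_up_v).reshape(seq_len, n_heads, d_head)

full_cache_floats = seq_len * 2 * n_heads * d_head  # standard KV cache size
mla_cache_floats = latent_cache.size                # latent cache size
print(f"cache reduction: {1 - mla_cache_floats / full_cache_floats:.1%}")
```

With these toy sizes the cached floats shrink by roughly 94%, the same order of magnitude as the figure the article cites. The production technique also involves decoupled positional keys and folding the up-projections into the attention computation, which this sketch omits.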

Taiwan Says Government Departments Should Not Use DeepSeek, Citing Security Concerns (reuters.com) 37
Democratically-governed Taiwan has long been wary of Chinese tech given Beijing's sovereignty claims over the island and its military and political threats against the government in Taipei. In a statement, Taiwan's Ministry of Digital Affairs said that government departments are not allowed to use DeepSeek's AI service to "prevent information security risks".
"DeepSeek's AI service is a Chinese product, and its operation involves cross-border transmission and information leakage and other information security concerns, and is a product that jeopardises the country's information security," the ministry said.