Programming

Apple Migrates Its Password Monitoring Service to Swift from Java, Gains 40% Performance Uplift (infoq.com) 106

Meta and AWS have used Rust, and Netflix uses Go, reports the programming news site InfoQ. But Apple chose yet another language when it recently "migrated its global Password Monitoring service from Java to Swift, achieving a 40% increase in throughput, and significantly reducing memory usage."

This freed up nearly 50% of their previously allocated Kubernetes capacity, according to the article, and even "improved startup time, and simplified concurrency." In a recent post, Apple engineers detailed how the rewrite helped the service scale to billions of requests per day while improving responsiveness and maintainability... "Swift allowed us to write smaller, less verbose, and more expressive codebases (close to 85% reduction in lines of code) that are highly readable while prioritizing safety and efficiency."

Apple's Password Monitoring service, part of the broader Passwords app ecosystem, is responsible for securely checking whether a user's saved credentials have appeared in known data breaches, without revealing any private information to Apple. It handles billions of requests daily, performing cryptographic comparisons using privacy-preserving protocols. This workload demands high computational throughput, tight latency bounds, and elastic scaling across regions... Apple's previous Java implementation struggled to meet the service's growing performance and scalability needs. Garbage collection caused unpredictable pause times under load, degrading latency consistency. Startup overhead from JVM initialization, class loading, and just-in-time compilation slowed the system's ability to scale in real time. Additionally, the service's memory footprint, often reaching tens of gigabytes per instance, reduced infrastructure efficiency and raised operational costs.
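
Apple hasn't published its implementation, but the privacy goal described above (checking a credential against a breach corpus without revealing it to the server) can be illustrated with the simpler k-anonymity hash-prefix scheme popularized by Have I Been Pwned. This is a sketch of that general idea, not Apple's protocol, which relies on more sophisticated private-set-intersection cryptography; `fetch_suffixes` is a hypothetical stand-in for a server-side range-query API.

```python
import hashlib

def split_hash(password: str) -> tuple[str, str]:
    """SHA-1 the credential; only the 5-character prefix ever leaves the client."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def is_breached(password: str, fetch_suffixes) -> bool:
    """fetch_suffixes(prefix) is a hypothetical range-query call returning the
    suffixes of every breached hash sharing that prefix, so the server never
    learns which credential (if any) the client was actually checking."""
    prefix, suffix = split_hash(password)
    return suffix in fetch_suffixes(prefix)
```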

Originally developed as a client-side language for Apple platforms, Swift has since expanded into server-side use cases.... Swift's deterministic memory management, based on reference counting rather than garbage collection (GC), eliminated latency spikes caused by GC pauses. This consistency proved critical for a low-latency system at scale. After tuning, Apple reported sub-millisecond 99.9th percentile latencies and a dramatic drop in memory usage: Swift instances consumed hundreds of megabytes, compared to tens of gigabytes with Java.
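
Apple's Swift code isn't public, but the determinism the article credits to reference counting is easy to demonstrate, even in Python: CPython also reclaims objects by reference counting, freeing them the instant the last reference disappears, while anything left to a tracing collector (here, CPython's cycle collector standing in for a JVM-style GC) is reclaimed only when the collector chooses to run. A minimal sketch:

```python
import gc

class Buffer:
    def __del__(self):
        print("reclaimed")

buf = Buffer()
del buf                  # refcount hits zero: "reclaimed" prints immediately

a, b = Buffer(), Buffer()
a.peer, b.peer = b, a    # a reference cycle: refcounts never reach zero
del a, b                 # nothing prints yet...
gc.collect()             # ...until the collector runs, on *its* schedule
```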

"While this isn't a sign that Java and similar languages are in decline," concludes InfoQ's article, "there is growing evidence that at the uppermost end of performance requirements, some are finding that general-purpose runtimes no longer suffice."
Python

Python Creator Guido van Rossum Asks: Is 'Worse is Better' Still True for Programming Languages? (blogspot.com) 67

In 1989 a computer scientist argued that more functionality in software actually lowers usability and practicality — leading to the counterintuitive proposition that "worse is better". But is that still true?

Python's original creator Guido van Rossum addressed the question last month in a lightning talk at the annual Python Language Summit 2025. Guido started by recounting the early periods of Python development 35 years ago, when he used UNIX "almost exclusively" and thus "Python was greatly influenced by UNIX's 'worse is better' philosophy"... "The fact that [Python] wasn't perfect encouraged many people to start contributing. All of the code was straightforward, there were no thoughts of optimization... These early contributors also now had a stake in the language; [Python] was also their baby"...

Guido contrasted early development to how Python is developed now: "features that take years to produce from teams of software developers paid by big tech companies. The static type system requires an academic-level understanding of esoteric type system features." And this isn't just Python the language, "third-party projects like numpy are maintained by folks who are paid full-time to do so.... Now we have a huge community, but very few people, relatively speaking, are contributing meaningfully."

Guido asked whether the expectation for Python contributors going forward would be that "you had to write a perfect PEP or create a perfect prototype that can be turned into production-ready code?" Guido pined for the "old days" when feature development could skip performance or feature-completeness to get something into the hands of the community to "start kicking the tires". "Do we have to abandon 'worse is better' as a philosophy and try to make everything as perfect as possible?" Guido thought doing so "would be a shame", but that he "wasn't sure how to change it", acknowledging that core developers wouldn't want to create features and then break users with future releases.

Guido referenced David Hewitt's PyO3 talk about Rust and Python, noting that PyO3's development "was using worse is better," where there is a core feature set that works, and plenty of work to be done and open questions. "That sounds a lot more fun than working on core CPython," Guido paused, "...not that I'd ever personally learn Rust. Maybe I should give it a try after," which garnered laughter from core developers.

"Maybe we should do more of that: allowing contributors in the community to have a stake and care".

Java

UK Universities Sign $13.3 Million Deal To Avoid Oracle Java Back Fees (theregister.com) 30

An anonymous reader quotes a report from The Register: UK universities and colleges have signed a framework worth up to 9.86 million pounds ($13.33 million) with Oracle to use its controversial Java SE Universal Subscription model, in exchange for a "waiver of historic fees due for any institutions who have used Oracle Java since 2023." Jisc, a membership organization that runs procurement for higher and further education establishments in the UK, said it had signed an agreement to purchase the new subscription licenses after consultation with members. In a procurement notice, it said institutions that use Oracle Java SE are required to purchase subscriptions. "The agreement includes the waiver of historic fees due for any institutions who have used Oracle Java since 2023," the notice said.

The Java SE Universal Subscription was introduced in January 2023 to an outcry from licensing experts and analysts. It moved licensing of Java from a per-user basis to a per-employee basis. At the time, Oracle said it was "a simple, low-cost monthly subscription that includes Java SE Licensing and Support for use on Desktops, Servers or Cloud deployments." However, licensing advisors said early calculations to help some clients showed that the revamp might increase costs by up to ten times. Later, analysis from Gartner found the per-employee subscription model to be two to five times more expensive than the legacy model.

"For large organizations, we expect the increase to be two to five times, depending on the number of employees an organization has," Nitish Tyagi, principal Gartner analyst, said in July 2024. "Please remember, Oracle defines employees as part-time, full-time, temporary, agents, contractors, as in whosoever supports internal business operations has to be licensed as per the new Java Universal SE Subscription model." Since the introduction of the new Oracle Java licensing model, user organizations have been strongly advised to move off Oracle Java and find open source alternatives for their software development and runtime environments. A survey of Oracle users found that only one in ten was likely to continue to stay with Oracle Java, in part as a result of the licensing changes.

Python

New Code.org Curriculum Aims To Make Schoolkids Python-Literate and AI-Ready 48

Longtime Slashdot reader theodp writes: The old Code.org curriculum page for middle and high school students has been changed to include a new Python Lab in the tech-backed nonprofit's K-12 offerings. Elsewhere on the site, a Computer Science and AI Foundations curriculum is described that includes units on 'Foundations of AI Programming [in Python]' and 'Insights from Data and AI [aka Data Science].' A more-detailed AI Foundations Syllabus 25-26 document promises a second semester of material is coming soon: "This semester offers an innovative approach to teaching programming by integrating learning with and about artificial intelligence (AI). Using Python as the primary language, students build foundational programming skills while leveraging AI tools to enhance computational thinking and problem-solving. The curriculum also introduces students to the basics of creating AI-powered programs, exploring machine learning, and applying data science principles."

Newly-posted videos on Code.org's YouTube channel appear to be intended to support the new Python-based CS & AI course. "Python is extremely versatile," explains a Walmart data scientist to open the video for Data Science: Using Python. "So, first of all, Python is one of the very few languages that can handle numbers very, very well." A researcher at the Univ. of Washington's Institute for Health Metrics and Evaluation (IHME) adds, "Python is the gold standard and what people expect data scientists to know [...] Key to us being able to handle really big data sets is our use of Python and cluster computing." Adding to the Python love, an IHME data analyst explains, "Python is a great choice for large databases because there's a lot of support for Python libraries."
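
For what it's worth, the "handles numbers very well" pitch refers less to the language itself than to its numeric libraries. A toy example of the vectorized, library-driven style the quoted data scientists are describing (numpy assumed installed, data simulated):

```python
import numpy as np

# Summarize ten million simulated measurements without writing a single loop.
rng = np.random.default_rng(seed=0)
readings = rng.normal(loc=98.6, scale=0.7, size=10_000_000)
print(f"mean={readings.mean():.2f}  p99={np.percentile(readings, 99):.2f}")
```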

Code.org is currently recruiting teachers to attend its CS and AI Foundations Professional Learning program this summer, which is being taught by Code.org's national network of university and nonprofit regional partners (teachers who sign up have a chance to win $250 in DonorsChoose credits for their classrooms). A flyer for a five-day Michigan Professional Development program to prepare teachers for a pilot of the Code.org CS & AI course touts the new curriculum as "an alternative to the AP [Computer Science] pathway" (teachers are offered scholarships covering registration, lodging, meals, and workshop materials).

Interestingly, Code.org's embrace of Python and Data Science comes as the nonprofit changes its mission to 'make CS and AI a core part of K-12 education' and launches a new national campaign with tech leaders to make CS and AI a graduation requirement. Prior to AI changing the education conversation, Code.org in 2021 boasted that it had lined up a consortium of tech giants, politicians, and educators to push its new $15 million Amazon-bankrolled Java AP CS A curriculum into K-12 classrooms. Just three years later, however, Amazon CEO Andy Jassy was boasting to investors that Amazon had turned to AI to automatically do Java coding that he claimed would have otherwise taken human coders 4,500 developer-years to complete.
Programming

Bill Atkinson, HyperCard Creator and Original Mac Team Member, Dies at Age 74 (appleinsider.com) 53

AppleInsider reports: The engineer behind much of the Mac's early graphical user interfaces, QuickDraw, MacPaint, HyperCard and much more, William D. "Bill" Atkinson, died on June 5 of complications from pancreatic cancer...

Atkinson, who built a post-Apple career as a noted nature photographer, worked at Apple from 1978 to 1990. Among his lasting contributions to Apple's computers were the invention of the menubar, the selection lasso, the "marching ants" item selection animation, and the discovery of a midpoint circle algorithm that enabled the rapid drawing of circles on-screen.
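
Atkinson's originals were hand-tuned Pascal and 68k assembly, so the sketch below is only the textbook midpoint circle algorithm the paragraph alludes to: it walks a single octant with an incremental integer error term (no floating point, no square roots) and mirrors each point eight ways.

```python
def midpoint_circle(cx: int, cy: int, r: int) -> list[tuple[int, int]]:
    """Pixels of a circle of radius r centered at (cx, cy), integer math only."""
    points = []
    x, y = r, 0
    err = 1 - r                      # decision variable for the midpoint test
    while x >= y:
        # Compute one octant; eight-way symmetry fills in the rest.
        for dx, dy in ((x, y), (y, x), (-y, x), (-x, y),
                       (-x, -y), (-y, -x), (y, -x), (x, -y)):
            points.append((cx + dx, cy + dy))
        y += 1
        if err < 0:                  # midpoint inside the circle: keep x
            err += 2 * y + 1
        else:                        # midpoint outside: step x inward
            x -= 1
            err += 2 * (y - x) + 1
    return points
```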

He was Apple Employee No. 51, recruited by Steve Jobs. Atkinson was one of the 30 team members to develop the first Macintosh, but also was principal designer of the Lisa's graphical user interface (GUI), a novelty in computers at the time. He was fascinated by the concept of dithering, by which computers using dots could create nearly photographic images similar to the way newspapers printed photos. He is also credited (alongside Jobs) for the invention of RoundRects, the rounded rectangles still used in Apple's system messages, application windows, and other graphical elements on Apple products.
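
That fascination with dithering is why an error-diffusion scheme still bears his name: Atkinson dithering propagates only 6/8 of each pixel's quantization error to six nearby pixels, deliberately losing the rest, which gives classic Mac images their bright, punchy look. A minimal sketch (numpy assumed; grayscale input with values 0-255):

```python
import numpy as np

def atkinson_dither(gray: np.ndarray) -> np.ndarray:
    """1-bit Atkinson dithering of a 2-D grayscale array (values 0-255)."""
    out = gray.astype(np.float64).copy()
    h, w = out.shape
    # Each neighbor gets 1/8 of the error; the remaining 2/8 is dropped,
    # which is the distinctive "lost light" of Atkinson's variant.
    taps = ((1, 0), (2, 0), (-1, 1), (0, 1), (1, 1), (0, 2))  # (dx, dy)
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = new
            err = (old - new) / 8.0
            for dx, dy in taps:
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    out[ny, nx] += err
    return out.astype(np.uint8)
```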

HyperCard was Atkinson's main claim to fame. He built a hypermedia approach to building applications that he once described as a "software erector set." The HyperCard technology debuted in 1987, and greatly opened up Macintosh software development.

In 2012, video clips of Atkinson appeared in rediscovered archival footage. (Original Macintosh team developer Andy Hertzfeld uploaded "snippets from interviews with members of the original Macintosh design team, recorded in October 1983 for projected TV commercials that were never used.")

Blogger John Gruber calls Atkinson "One of the great heroes in not just Apple history, but computer history." If you want to cheer yourself up, go to Andy Hertzfeld's Folklore.org site and (re-)read all the entries about Atkinson. Here's just one, with Steve Jobs inspiring Atkinson to invent the roundrect. Here's another (surely near and dear to my friend Brent Simmons's heart) with this kicker of a closing line: "I'm not sure how the managers reacted to that, but I do know that after a couple more weeks, they stopped asking Bill to fill out the form, and he gladly complied."

Some of his code and algorithms are among the most efficient and elegant ever devised. The original Macintosh team was chock full of geniuses, but Atkinson might have been the most essential to making the impossible possible under the extraordinary technical limitations of that hardware... In addition to his low-level contributions like QuickDraw, Atkinson was also the creator of MacPaint (which to this day stands as the model for bitmap image editors — Photoshop, I would argue, was conceptually derived directly from MacPaint) and HyperCard ("inspired by a mind-expanding LSD journey in 1985"), the influence of which cannot be overstated.

I say this with no hyperbole: Bill Atkinson may well have been the best computer programmer who ever lived. Without question, he's on the short list. What a man, what a mind, what gifts to the world he left us.

Programming

Ask Slashdot: How Important Is It For Programmers to Learn Touch Typing? 189

Once upon a time, long-time Slashdot reader tgibson learned how to type on a manual typewriter, back in an 8th grade classroom.

And to this day, they write, "my bias is to nod approvingly at touch typists and roll my eyes at those who need to stare at the keyboard while typing..." But how true is that for computer professionals today? After 15 years I left industry and became a post-secondary computer science educator. Occasionally I rant to my students about the importance of touch-typing as a skill to have as a software engineer.

But I've been out of the game for some time now. Those of you hiring or working with freshly-minted software engineers, what's your take?

One anonymous Slashdot reader responded: Oh, you mean the kid in the next cubicle that has said "Hey Siri" 297 times this morning? I'll let you know when he starts typing. A minor suggestion to office managers... please purchase a very quiet keyboard. Fellow cube-mates who are accomplished typists would consider that audible struggling to be akin to nails on a blackboard...
Share your own thoughts in the comments.

Programming

'For Algorithms, a Little Memory Outweighs a Lot of Time' (quantamagazine.org) 37

MIT comp-sci professor Ryan Williams suspected that a small amount of memory "would be as helpful as a lot of time in all conceivable computations..." writes Quanta magazine.

"In February, he finally posted his proof online, to widespread acclaim..." Every algorithm takes some time to run, and requires some space to store data while it's running. Until now, the only known algorithms for accomplishing certain tasks required an amount of space roughly proportional to their runtime, and researchers had long assumed there's no way to do better. Williams' proof established a mathematical procedure for transforming any algorithm — no matter what it does — into a form that uses much less space.

What's more, this result — a statement about what you can compute given a certain amount of space — also implies a second result, about what you cannot compute in a certain amount of time. This second result isn't surprising in itself: Researchers expected it to be true, but they had no idea how to prove it. Williams' solution, based on his sweeping first result, feels almost cartoonishly excessive, akin to proving a suspected murderer guilty by establishing an ironclad alibi for everyone else on the planet. It could also offer a new way to attack one of the oldest open problems in computer science.
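
In symbols, and hedging on exact constants, the result is generally reported as showing that any multitape Turing machine running in time t(n) can be simulated using far less space, which in turn implies a time lower bound:

```latex
% Main simulation theorem: time-t computations need only about sqrt(t) space.
\mathrm{TIME}[t(n)] \subseteq \mathrm{SPACE}\!\left[O\!\left(\sqrt{t(n)\,\log t(n)}\right)\right]

% Implied separation: some problems solvable in linear space
% cannot be solved in near-quadratic time.
\forall \varepsilon > 0:\quad \mathrm{SPACE}[n] \not\subseteq \mathrm{TIME}\!\left[n^{2-\varepsilon}\right]
```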

"It's a pretty stunning result, and a massive advance," said Paul Beame, a computer scientist at the University of Washington.

Thanks to long-time Slashdot reader mspohr for sharing the article.
IOS

What To Expect From Apple's WWDC (arstechnica.com) 26

Apple's Worldwide Developers Conference 2025 (WWDC25) kicks off next week, on June 9th, showcasing the company's latest software and new technologies. That includes the next version of iOS, which is rumored to have the most significant design overhaul since the introduction of iOS 7. Here's an overview of what to expect:

Major Software Redesigns
Apple plans to shift its operating system naming to reflect the release year, moving from sequential numbers to year-based identifiers. Consequently, the upcoming releases will be labeled as iOS 26, macOS 26, watchOS 26, etc., streamlining the versioning across platforms.

iOS 26 is anticipated to feature a glossy, glass-like interface inspired by visionOS, incorporating translucent elements and rounded buttons. This design language is expected to extend across iPadOS, macOS, watchOS, and tvOS, promoting a cohesive user experience across devices. Core applications like Phone, Safari, and Camera are slated for significant redesigns, too. For instance, Safari may introduce a translucent, "glassy" address bar, aligning with the new visual aesthetics.

While AI is not expected to be the main focus, given the current state of Siri's readiness, some AI-related updates are rumored. The Shortcuts app may gain "Apple Intelligence," enabling users to create shortcuts using natural language. It's also possible that Gemini will be offered as an option for AI functionalities on the iPhone, similar to ChatGPT.

Other App and Feature Updates
The lock screen might display charging estimates, indicating how long it will take for the phone to fully charge. There's a rumor about bringing live translation features to AirPods. The Messages app could receive automatic translations and call support; the Music app might introduce full-screen animated lock screen art; and Apple Notes may get markdown support. Users may also only need to log into a captive Wi-Fi portal once, and all their devices will automatically be logged in.

Significant updates are expected for Apple Home. There's speculation about the potential announcement of a "HomePad" with a screen, Apple's competitor to devices like Google's Nest Hub. A new dedicated Apple gaming app is also anticipated to replace Game Center.
If you're expecting new hardware, don't hold your breath. The event is expected to focus primarily on software developments. Apple may even drop support for several older Intel-based Macs in macOS 26, including the 2018 MacBook Pro and the 2019 iMac, as it continues its transition toward exclusive support for Apple Silicon devices.

Sources:
Apple WWDC 2025 Rumors and Predictions! (Waveform)
WWDC 2025 Overview (MacRumors)
WWDC 2025: What to expect from this year's conference (TechCrunch)
What to expect from Apple's Worldwide Developers Conference next week (Ars Technica)
Apple's WWDC 2025: How to Watch and What to Expect (Wired)
Programming

Andrew Ng Says Vibe Coding is a Bad Name For a Very Real and Exhausting Job (businessinsider.com) 79

An anonymous reader shares a report: Vibe coding might sound chill, but Andrew Ng thinks the name is unfortunate. The Stanford professor and former Google Brain scientist said the term misleads people into imagining engineers just "go with the vibes" when using AI tools to write code. "It's unfortunate that that's called vibe coding," Ng said during a fireside chat in May at the LangChain Interrupt conference. "It's misleading a lot of people into thinking, just go with the vibes, you know -- accept this, reject that."

In reality, coding with AI is "a deeply intellectual exercise," he said. "When I'm coding for a day with AI coding assistance, I'm frankly exhausted by the end of the day." Despite his gripe with the name, Ng is bullish on AI-assisted coding. He said it's "fantastic" that developers can now write software faster with these tools, sometimes while "barely looking at the code."

Education

Code.org Changes Mission To 'Make CS and AI a Core Part of K-12 Education' 40

theodp writes: Way back in 2010, Microsoft and Google teamed with nonprofit partners to launch Computing in the Core, an advocacy coalition whose mission was "to strengthen computing education and ensure that it is a core subject for students in the 21st century." In 2013, Computing in the Core was merged into Code.org, a new tech-backed-and-directed nonprofit. And in 2015, Code.org declared 'Mission Accomplished' with the passage of the Every Student Succeeds Act, which elevated computer science to a core academic subject for grades K-12.

Fast forward to June 2025 and Code.org has changed its About page to reflect a new AI mission that's near-and-dear to the hearts of Code.org's tech giant donors and tech leader Board members: "Code.org is a nonprofit working to make computer science (CS) and artificial intelligence (AI) a core part of K-12 education for every student." The mission change comes as tech companies are looking to chop headcount amid the AI boom and just weeks after tech CEOs and leaders launched a new Code.org-orchestrated national campaign to make CS and AI a graduation requirement.
Programming

Morgan Stanley Says Its AI Tool Processed 9 Million Lines of Legacy Code This Year And Saved 280,000 Developer Hours (msn.com) 88

Morgan Stanley has deployed an in-house AI tool called DevGen.AI that has reviewed nine million lines of legacy code this year, saving the investment bank's developers an estimated 280,000 hours by translating outdated programming languages into plain English specifications that can be rewritten in modern code.

The tool, built on OpenAI's GPT models and launched in January, addresses what Mike Pizzi, the company's global head of technology and operations, calls one of enterprise software's biggest pain points -- modernizing decades-old code that weakens security and slows new technology adoption. While commercial AI coding tools excel at writing new code, they lack expertise in older or company-specific programming languages like Cobol, prompting Morgan Stanley to train its own system on its proprietary codebase.

The tool's primary strength, the bank said, lies in creating English specifications that map what legacy code does, enabling any of the company's 15,000 developers worldwide to rewrite it in modern programming languages rather than relying on a dwindling pool of specialists familiar with antiquated coding systems.
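
DevGen.AI itself is proprietary, but the pattern the article describes (an LLM renders legacy source into an English spec that a human then reimplements) is straightforward to sketch with the public OpenAI Python SDK. The model name, prompts, and function below are illustrative assumptions, not Morgan Stanley's pipeline, which was additionally trained on the bank's own codebase.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def legacy_to_spec(source: str, language: str = "COBOL") -> str:
    """Produce a plain-English specification of a legacy program (illustrative)."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the bank's model choice isn't public
        messages=[
            {"role": "system",
             "content": "You write precise, complete plain-English "
                        "specifications of legacy programs."},
            {"role": "user",
             "content": f"Specify the behavior of this {language} program, "
                        f"including inputs, outputs, and edge cases:\n\n{source}"},
        ],
    )
    return response.choices[0].message.content
```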
Programming

AI Startups Revolutionize Coding Industry, Leading To Sky-High Valuations 39

Code generation startups are attracting extraordinary investor interest two years after ChatGPT's launch, with companies like Cursor raising $900 million at a $10 billion valuation despite operating with negative gross margins. OpenAI is reportedly in talks to acquire Windsurf, maker of the Codeium coding tool, for $3 billion, while the startup generates $50 million in annualized revenue from a product launched just seven months ago.

These "vibe coding" platforms allow users to write software using plain English commands, attempting to fundamentally change how code gets written. Cursor went from zero to $100 million in recurring revenue in under two years with just 60 employees, though both major startups spend more money than they generate, Reuters reports, citing investor sources familiar with their operations.

The surge comes as major technology giants report significant portions of their code now being AI-generated -- Google claims over 30% while Microsoft reports 20-30%. Meanwhile, entry-level programming positions have declined 24% as companies increasingly rely on AI tools to handle basic coding tasks previously assigned to junior developers.
Programming

How Stack Overflow's Reputation System Led To Its Own Downfall (infoworld.com) 103

A new analysis argues that Stack Overflow's decline began years before AI tools delivered the "final blow" to the once-dominant programming forum. Monthly questions peaked at around 200,000, and while usage had been declining since 2014, the steep collapse began in earnest after ChatGPT's launch, according to data cited in the InfoWorld analysis.

The platform's remarkable reputation system initially elevated it above competitors by allowing users to earn points and badges for helpful contributions, but that same system eventually became its downfall, the piece argues. As Stack Overflow evolved into a self-governing platform where high-reputation users gained moderation powers, the community transformed from a welcoming space for developer interaction into what the author compares to a "Stanford Prison Experiment" where moderators systematically culled interactions they deemed irrelevant.
Programming

Amid Turmoil, Stack Overflow Asks About AI, Salary, Remote Work in 15th Annual Developer Survey (stackoverflow.blog) 10

Stack Overflow remains in the midst of big changes to counter an AI-fueled drop in engagement. So "We're wondering what kind of online communities Stack Overflow users continue to support in the age of AI," writes their senior analyst, "and whether AI is becoming a closer companion than ever before."

For the 15th year of their annual reader survey, this means "we're not just collecting data; we're reflecting on the last year of questions, answers, hallucinations, job changes, tech stacks, memory allocations, models, systems and agents — together..." Is it an AI agent revolution yet? Are you building or utilizing AI agents? We want to know how these intelligent assistants are changing your daily workflow and if developers are really using them as much as these keynote speeches assume. We're asking if you are using these tools and where humans are still needed for common developer tasks.

Career shifts: We're keen to understand if you've considered a career change or transitioned roles and if AI is impacting your approach to learning or using existing tools. Did we make up the difference in salaries globally for tech workers...?

They're also re-visiting "a key finding from recent surveys highlighted a significant statistic: 80% of developers reported being unhappy or complacent in their jobs." This raised questions about changing office (and return-to-office) culture and the pressures of the industry, along with whether there were any insights into what could help developers feel more satisfied at work. Prior research confirmed that flexibility at work used to contribute more than salary to job satisfaction, but 2024's results show us that remote work is not more impactful than salary when it comes to overall satisfaction... [For some positions job satisfaction stayed consistent regardless of salary, though it increased with salary for other positions. And embedded developers said their happiness increased when they worked with top-quality hardware, while desktop developers cited "contributing to open source" and engineering managers were happier when "driving strategy".]

In 2024, our data showed that many developers experienced a pay cut in various roles and programming specialties. In an industry often seen as highly lucrative, this was a notable shift of around 7% lower salaries across the top ten reporting countries for the same roles. This year, we're interested in whether this trend has continued, reversed, or stabilized. Salary dynamics have been an indicator of job satisfaction in recent surveys of Stack Overflow users, and understanding trends for these roles can perhaps help identify the most useful factors contributing to role satisfaction beyond salary.

And of course they're asking about AI — while noting last year's survey uncovered this paradox. "While AI usage is growing (70% in 2023 vs. 76% in 2024 planning to or currently using AI tools), developer sentiment isn't necessarily following suit, as 77% of all respondents in 2023 are favorable or very favorable of AI tools for development compared to 72% of all respondents in 2024." Concerns about accuracy and misinformation were prevalent among some key groups. More developers learning to code are using or are interested in using AI tools than professional developers (84% vs. 77%)... Developers with 10-19 years of experience were most likely (84%) to name "increase in productivity" as a benefit of AI tools, higher than developers with less experience (<80%)...


AI

Is the AI Job Apocalypse Already Here for Some Recent Grads? (msn.com) 117

"This month, millions of young people will graduate from college," reports the New York Times, "and look for work in industries that have little use for their skills, view them as expensive and expendable, and are rapidly phasing out their jobs in favor of artificial intelligence." That is the troubling conclusion of my conversations over the past several months with economists, corporate executives and young job seekers, many of whom pointed to an emerging crisis for entry-level workers that appears to be fueled, at least in part, by rapid advances in AI capabilities.

You can see hints of this in the economic data. Unemployment for recent college graduates has jumped to an unusually high 5.8% in recent months, and the Federal Reserve Bank of New York recently warned that the employment situation for these workers had "deteriorated noticeably." Oxford Economics, a research firm that studies labor markets, found that unemployment for recent graduates was heavily concentrated in technical fields like finance and computer science, where AI has made faster gains. "There are signs that entry-level positions are being displaced by artificial intelligence at higher rates," the firm wrote in a recent report.

But I'm convinced that what's showing up in the economic data is only the tip of the iceberg. In interview after interview, I'm hearing that firms are making rapid progress toward automating entry-level work and that AI companies are racing to build "virtual workers" that can replace junior employees at a fraction of the cost. Corporate attitudes toward automation are changing, too — some firms have encouraged managers to become "AI-first," testing whether a given task can be done by AI before hiring a human to do it. One tech executive recently told me his company had stopped hiring anything below an L5 software engineer — a midlevel title typically given to programmers with three to seven years of experience — because lower-level tasks could now be done by AI coding tools. Another told me that his startup now employed a single data scientist to do the kinds of tasks that required a team of 75 people at his previous company...

"This is something I'm hearing about left and right," said Molly Kinder, a fellow at the Brookings Institution, a public policy think tank, who studies the impact of AI on workers. "Employers are saying, 'These tools are so good that I no longer need marketing analysts, finance analysts and research assistants.'" Using AI to automate white-collar jobs has been a dream among executives for years. (I heard them fantasizing about it in Davos back in 2019.) But until recently, the technology simply wasn't good enough...
