

Google Calls for Measurable Memory-Safety Standards for Software (googleblog.com) 42
Memory safety bugs are "eroding trust in technology and costing billions," argues a new post on Google's security blog — adding that "traditional approaches, like code auditing, fuzzing, and exploit mitigations — while helpful — haven't been enough to stem the tide."
So the blog post calls for a "common framework" for "defining specific, measurable criteria for achieving different levels of memory safety assurance." The hope is this gives policy makers "the technical foundation to craft effective policy initiatives and incentives promoting memory safety" leading to "a market in which vendors are incentivized to invest in memory safety." ("Customers will be empowered to recognize, demand, and reward safety.")
In January the same Google security researchers helped co-write an article noting there are now strong memory-safety "research technologies" that are sufficiently mature: memory-safe languages (including "safer language subsets like Safe Buffers for C++"), mathematically rigorous formal verification, software compartmentalization, and hardware and software protections. (With hardware protections including things like ARM's Memory Tagging Extension and the Capability Hardware Enhanced RISC Instructions, or "CHERI", architecture.) Google's security researchers are now calling for "a blueprint for a memory-safe future" — though importantly, the idea is "defining the desired outcomes rather than locking ourselves into specific technologies."
Their blog post this week again urges a practical, actionable framework that's commonly understood, but one that supports different approaches (allowing tailoring to specific needs) while enabling objective assessment: At Google, we're not just advocating for standardization and a memory-safe future, we're actively working to build it. We are collaborating with industry and academic partners to develop potential standards, and our joint authorship of the recent CACM call-to-action marks an important first step in this process... This commitment is also reflected in our internal efforts. We are prioritizing memory-safe languages, and have already seen significant reductions in vulnerabilities by adopting languages like Rust in combination with existing, widespread usage of Java, Kotlin, and Go where performance constraints permit. We recognize that a complete transition to those languages will take time. That's why we're also investing in techniques to improve the safety of our existing C++ codebase by design, such as deploying hardened libc++.
This effort isn't about picking winners or dictating solutions. It's about creating a level playing field, empowering informed decision-making, and driving a virtuous cycle of security improvement... The journey towards memory safety requires a collective commitment to standardization. We need to build a future where memory safety is not an afterthought but a foundational principle, a future where the next generation inherits a digital world that is secure by design.
The security researchers' post calls for "a collective commitment" to eliminate memory-safety bugs, "anchored on secure-by-design practices..." One of the blog post's subheadings? "Let's build a memory-safe future together."
And they're urging changes "not just for ourselves but for the generations that follow."
Great (Score:3)
Memory safety bugs are "eroding trust in technology and costing billions,"
I approve their efforts to measure things objectively. However, memory bugs are not eroding trust in technology. The vast majority of people don't even know bugs exist. I would like to see their objective attempt to measure trust in technology.
Oh, people know bugs exist (Score:2)
They get reminded monthly when Windows tells them to "update and restart."
As to the difference between a memory safety bug and any other bug that could drain their bank account behind their back if they don't OBEY AND UPDATE AND RESTART NOW, they probably neither know nor care.
Re: (Score:2)
Obviously not secure enough about it that you feel the need to tell everybody.
Count me in (Score:3)
As a C++ developer with a large codebase, count me in.
I'd really prefer if they worked on a nice Valgrind replacement that could use multiple cores during heavy tracing too.
Re: Count me in (Score:2)
Comprehensive Rust (Score:2)
I don't understand why a company like Google doesn't just make their own library of memory-safe constructs and tell everyone to use that.
You may have missed that Google did just that. Google employees wrote "Comprehensive Rust" [github.io], a tutorial for a memory-safe language.
Re: Comprehensive Rust (Score:2)
Re: (Score:2)
Can't really be done in C++.
Let's suppose, for example, that you wrote a "memory safe" vector. If you pass any of the vector elements by reference to a function, especially in a concurrent context, how is the function supposed to know that the pointer is still valid at any given moment?
It just can't. Why? Among other things, the language lacks features like lifetime annotations, and adding a borrow checker would totally negate the only remaining argument for continuing to use C++ to begin with. To even come
Re: Comprehensive Rust (Score:3)
Re: (Score:2)
Of course it does, and not only did I allude to that, but I even named a few. For example, what C++ calls shared_ptr is just an atomic reference counter. (Rust, by the way, has both a regular reference counter and an atomic one, because you don't always need atomics, which are not available on every platform and carry their own overhead.) What you call weak_ptr makes use of locking behavior, among other things.
https://stackoverflow.com/a/61... [stackoverflow.com]
Now you might be shouting
Re: Comprehensive Rust (Score:2)
Re: Comprehensive Rust (Score:2)
The only other solution is to add hardware-based memory destruction. That will help address the problem of high-cost C++ smart pointers, but it's not a silver bullet: it doesn't actually fix the problem of bad code, is not a truly zero-cost abstraction, and requires you to throw out existing hardware, which is an even worse idea than throwing out existing code.
In fact, WG21 has already put a lot more thought and effort into this than you could ever possibly hope to get from Slashdot, and they can't even come up w
Re: (Score:2)
Oh and more on unique_ptr:
https://youtu.be/rHIkrotSwcc&t... [youtu.be]
That's not a plan, that's a goal (Score:2)
"That's not a plan, that's a goal"
Good idea...but... (Score:2)
Standards are good
Improving technology is good
Memory safety is good
But, when the discussion turns to "the technical foundation to craft effective policy initiatives and incentives promoting memory safety", I get skeptical
If bureaucrats craft complex rules that are so difficult and expensive to follow that only giant corporations can deal with them, it will kill small projects, either commercial or open source
A large part of the success of digital technology is the ability of small developers to freely inven
Re: (Score:2)
That is just bullshit (Score:2, Troll)
Memory-safety issues are a coder skill issue and a result of cheaper-than-possible testing and review processes.
Re: (Score:2)
Pocket showed me an article last night about how "Vibe coding" with AI will enable anyone to write software. Google is just trying to get the vibe right for AI to do the testing and review.
~Goog Vibrations~
Re: (Score:2)
So this is just another push to try to make "AI" look useful? Makes sense.
Re:That is just bullshit (Score:4, Insightful)
Re: (Score:2)
The thing is, you either need what the absence of memory safety gives you in control, performance, and hardware access, and then it is a coder skill, testing, and review issue, meaning things being done cheaper than possible is the root cause of all the problems. Or you do not need those advantages and can just code in a memory-safe language. These have been around forever.
Incidentally, what you are asking for is called a "Harvard Architecture" ( https://en.wikipedia.org/wiki/... [wikipedia.org] ) and it is quite old. Never reall
Re: (Score:2)
Re: (Score:2)
Indeed. As C will not go away and will not become memory-safe, coding in C (or C++) should be restricted to where you actually need its advantages. And it has numerous advantages that a memory-safe language cannot give you, and lots of scenarios where it is the best choice. Hence for those cases, better skills, better processes, and likely liability and formal skill requirements are needed.
For cases where the restrictions that necessarily come with memory safety are not important, use a memory-safe language.
Haha, right, that's the trust problem. (Score:2)
Most of what erodes public trust(still probably less than it deserves to be eroded) is technology performing ex
Re: (Score:2)
Here are the guidelines. Use our AI to code it. $$ (Score:2, Interesting)
Google is driven by revenue and information vacuuming, not goodness. This will probably be an attempt to drive developers into using Google AI coding tools. Then people will get to pay for the pleasure of meeting Google's requirements and train Google's AI coding tools.
"Trust" in Trashdot space filler posts? YGTBSM. (Score:1)
Why would bugs "erode trust" when users don't even know what the term means and coders understand they're just bugs to be dealt with?
How much do the so-called editors get paid to vomit such garbage onto Slashdot? It's not news for nerds nor news for normals either.
Re:"Trust" in Trashdot space filler posts? (Score:3)
I'm just kind of stunned by this comment. So the question is whether or not security vulnerabilities in software... actually create any concerns about the security of software?
Users?! I took that statement (made by Google's security researchers) as saying the erosion of trust is happening "across the industry" -- so, among software professionals. ("For decades, memory safety vulnerabilities have been
Why does data still share memory with code? (Score:5, Interesting)
Yes, there will be limitations. Can't be much worse than the current strategy of a flat landscape and trying to keep artificial memory walls patched together.
Re: (Score:3)
Page tables supporting hardware-enforced no-execute pages is ubiquitous now.
The problem is solved.
I worked with a Zilog about a decade back that still had Harvard architecture. Enjoy.
Re: (Score:2)
You might not see this because you are working at a very high level, where it appears that all you have is one big chunk of main memory. A good way to think about all this is that RAM is like a file on a server halfway around the world, the various cache hierarchies are intermediate files on local servers, and the CPU's memory is actually only a tiny cache line.
Another framework? (Score:2)
Things like this always sound to me like politicians setting up another committee: just as useless, just as costly, and only to the benefit of themselves.
Maybe instead of throwing more bureaucracy at it, invest in programmers and people who are actually good at organising the code base to make it readable and maintainable, and ignore the latest AI hype, coding methodologies, shiny language features, and anything salespeople are trying to offload.
Re: (Score:3)
Double speak (Score:2)
Google calls for memory-safety standards !
Inspiring - sounds like big tech practicing good citizenship.
It's about time.
But I fear that it might be a hoodwink.
-------
Just in the past 2-3 days, Slashdot has run these articles:
Google Tweak Creates Crisis for Product-Review Sites
https://tech.slashdot.org/stor... [slashdot.org]
Google's AI Previews Erode the Internet, Edtech Company Says In Lawsuit
https://yro.slashdot.org/story... [slashdot.org]
Microsoft Begins Turning Off uBlock Origin, Other Extensions In Edge
(which also talks about Google
My memory or the computer's? (Score:1)
When I started skimming the info, my first thought was about improving the safety of my memory. Can Google do anything about that?
I don't like the acronym (Score:1)
Sounds like someone mispronouncing "cherry".
A better idea would be to expand the R, which is an acronym by itself. Then it would be "CHERISCI".
The final "sci" should be pronounced like in Italian (she), so it would sound like "cherish-e".
Much better.
Maybe if we stopped developing like it's 1968? (Score:2)
You Must Do it Our Way! (Score:2)