Google Calls for Measurable Memory-Safety Standards for Software (googleblog.com)

Memory safety bugs are "eroding trust in technology and costing billions," argues a new post on Google's security blog — adding that "traditional approaches, like code auditing, fuzzing, and exploit mitigations — while helpful — haven't been enough to stem the tide."

So the blog post calls for a "common framework" for "defining specific, measurable criteria for achieving different levels of memory safety assurance." The hope is this gives policy makers "the technical foundation to craft effective policy initiatives and incentives promoting memory safety" leading to "a market in which vendors are incentivized to invest in memory safety." ("Customers will be empowered to recognize, demand, and reward safety.")

In January the same Google security researchers helped co-write an article noting there are now strong memory-safety "research technologies" that are sufficiently mature: memory-safe languages (including "safer language subsets like Safe Buffers for C++"), mathematically rigorous formal verification, software compartmentalization, and hardware and software protections. (With hardware protections including things like ARM's Memory Tagging Extension and the Capability Hardware Enhanced RISC Instructions, or "CHERI", architecture.) Google's security researchers are now calling for "a blueprint for a memory-safe future" — though importantly, the idea is "defining the desired outcomes rather than locking ourselves into specific technologies."

Their blog post this week again urges a practical, actionable framework that's commonly understood, but one that supports different approaches (and allows tailoring to specific needs) while enabling objective assessment:

At Google, we're not just advocating for standardization and a memory-safe future, we're actively working to build it. We are collaborating with industry and academic partners to develop potential standards, and our joint authorship of the recent CACM call-to-action marks an important first step in this process... This commitment is also reflected in our internal efforts. We are prioritizing memory-safe languages, and have already seen significant reductions in vulnerabilities by adopting languages like Rust in combination with existing, widespread usage of Java, Kotlin, and Go where performance constraints permit. We recognize that a complete transition to those languages will take time. That's why we're also investing in techniques to improve the safety of our existing C++ codebase by design, such as deploying hardened libc++.

This effort isn't about picking winners or dictating solutions. It's about creating a level playing field, empowering informed decision-making, and driving a virtuous cycle of security improvement... The journey towards memory safety requires a collective commitment to standardization. We need to build a future where memory safety is not an afterthought but a foundational principle, a future where the next generation inherits a digital world that is secure by design.

The security researchers' post calls for "a collective commitment" to eliminate memory-safety bugs, "anchored on secure-by-design practices..." One of the blog post's subheadings? "Let's build a memory-safe future together."

And they're urging changes "not just for ourselves but for the generations that follow."

Comments:
  • by phantomfive ( 622387 ) on Saturday March 01, 2025 @10:51AM (#65203347) Journal

    Memory safety bugs are "eroding trust in technology and costing billions,"

    I approve their efforts to measure things objectively. However, memory bugs are not eroding trust in technology. The vast majority of people don't even know bugs exist. I would like to see their objective attempt to measure trust in technology.

    • They get reminded monthly when Windows tells them to "update and restart."

      As to the difference between a memory safety bug and any other bug that could drain their bank account behind their back if they don't OBEY AND UPDATE AND RESTART NOW, they probably neither know nor care.

  • by AcidFnTonic ( 791034 ) on Saturday March 01, 2025 @11:06AM (#65203359) Homepage

    As a C++ developer with a vast codebase, count me in.

    I'd really prefer if they worked on a nice valgrind replacement that could use multiple cores during heavy tracing too.

    • I don't understand why a company like Google doesn't just make their own Library of memory safe constructs and tell everyone to use that.
      • I don't understand why a company like Google doesn't just make their own Library of memory safe constructs and tell everyone to use that.

        You may have missed that Google did just that. Google employees wrote "Comprehensive Rust" [github.io], a tutorial for a memory-safe language.

        • That's Rust, but why not in C++?
          • Can't really be done in C++.

            Let's suppose, for example, that you wrote a "memory safe" vector. If you pass any of the vector elements by reference to a function, especially in a concurrent context, how is the function supposed to know that the pointer is still valid at any given moment?

            It just can't. Why? Among other things, the language lacks features like lifetime annotations, and adding a borrow checker would totally negate the only remaining argument for continuing to use C++ to begin with. To even come
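            To make the hazard concrete, a minimal sketch: growing a std::vector past its capacity relocates its storage, so any reference taken beforehand silently dangles. The sketch only compares the old and new storage addresses as integers, since dereferencing the stale pointer itself would be undefined behavior:

```cpp
#include <cstdint>
#include <vector>

// Pushing past capacity forces std::vector to reallocate, moving its
// elements to new storage; any reference or pointer taken before the
// reallocation is left dangling.
bool push_back_relocates_storage() {
    std::vector<int> v;
    v.push_back(0);
    auto before = reinterpret_cast<std::uintptr_t>(v.data());
    // Fill to capacity, then add one more element: that push_back must
    // reallocate, and the new block is allocated while the old one is
    // still live, so the two addresses necessarily differ.
    while (v.size() < v.capacity()) v.push_back(0);
    v.push_back(0);
    auto after = reinterpret_cast<std::uintptr_t>(v.data());
    return before != after;  // true: old references now dangle
}
```

            Nothing in the type system flags the stale reference; the compiler happily accepts code that keeps reading through it.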

            • Actually the C++ library has smart pointers. You just forbid raw pointers.
              • Of course it does, and not only did I allude to that, but I even named a few. For example, in C++ what you call shared_ptr is just an atomic reference counter. (Rust, by the way, has both a regular reference counter and an atomic one, because you don't always need atomics, which are not available on every platform and carry their own overhead.) What you call weak_ptr makes use of locking behavior, among other things.

                https://stackoverflow.com/a/61... [stackoverflow.com]

                Now you might be shouting
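                A minimal sketch of the ownership semantics being described, using only the standard <memory> header: shared_ptr owns through a reference count, while weak_ptr merely observes, and its lock() hands out an owning pointer only while the object is still alive:

```cpp
#include <memory>

// shared_ptr owns via a reference count; weak_ptr observes without
// owning.  lock() materializes a shared_ptr only if the object is
// still alive, which is how it avoids handing out dangling pointers.
bool weak_ptr_expires_with_owner() {
    std::weak_ptr<int> w;
    {
        auto s = std::make_shared<int>(42);
        w = s;                           // observing does not bump use_count
        if (s.use_count() != 1) return false;
        if (w.lock() == nullptr) return false;  // owner alive: lock() works
    }                                    // last shared_ptr destroyed here
    return w.lock() == nullptr;          // object gone: lock() yields null
}
```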

                • Ok, you keep talking about Rust, but I'm talking about fixing C++ code rather than porting it all to Rust.
                  • The only other solution is to add hardware-based memory destruction. That would help address the problem of high-cost C++ smart pointers, but it's not a silver bullet: it doesn't actually fix the problem of bad code, is not a truly zero-cost abstraction, and requires you to throw out existing hardware, which is an even worse idea than throwing out existing code.

                    In fact, WG21 has already put a lot more thought and effort into this than you could ever possibly hope to get from slashdot, and they can't even come up w

                • Oh and more on unique_ptr:

                  https://youtu.be/rHIkrotSwcc&t... [youtu.be]

  • "That's not a plan, that's a goal"

  • Standards are good
    Improving technology is good
    Memory safety is good
    But, when the discussion turns to "the technical foundation to craft effective policy initiatives and incentives promoting memory safety", I get skeptical
    If bureaucrats craft complex rules that are so difficult and expensive to follow that only giant corporations can deal with them, it will kill small projects, either commercial or open source
    A large part of the success of digital technology is the ability of small developers to freely inven

    • So it's a win-win for Google! Next week: a proposal from 4-6 of the largest 'cloud' and software vendors to formalize an 'Oligopolists for Safety' consortium. But don't worry, it's still competitive, because you can maintain certification by sharecropping on any one of them at the price they choose!
  • Memory-safety issues are a coder skill issue and result from cheaper-than-possible testing and review processes.

    • Pocket showed me an article last night about how "Vibe coding" with AI will enable anyone to write software. Google is just trying to get the vibe right for AI to do the testing and review.

      ~Goog Vibrations~

    • by Fly Swatter ( 30498 ) on Saturday March 01, 2025 @12:31PM (#65203507) Homepage
      Sure, it can be a coder skill issue, but if it's such a problem maybe it is time to design hardware architecture that simply doesn't allow mixing code and data memory. Right now it's all bandages and duct tape.
      • by gweihir ( 88907 )

        The thing is, you either need what absence of memory safety gives you in control, performance and hardware access, and then it is a coder skill, testing and review issue. Meaning things being done cheaper than possible is the root-cause for all the problems. Or you do not need the advantages and can just code in a memory safe language. These have been around forever.

        Incidentally, what you are asking for is called a "Harvard Architecture" ( https://en.wikipedia.org/wiki/... [wikipedia.org] ) and it is quite old. Never reall

      • The solution is either better training, or a system that doesn't let the errors happen. (Or you can do what we do now, and just accept them as part of life).
        • by gweihir ( 88907 )

          Indeed. As C will not go away and will not become memory-safe, coding in C (or C++) should be restricted to where you actually need its advantages. And it has numerous advantages that a memory-safe language cannot give you, and lots of scenarios where it is the best choice. Hence for those cases, better skills, better processes, and likely liability and formal skill requirements are needed.

          For cases where the restrictions that necessarily come with memory-safety are not important, use a memory-safe language.

  • I'd certainly be in favor of fewer bugs and less frenetic patching; but it seems transparently absurd to claim that memory safety issues, or even bugs in general, are what erode people's trust in technology, outside of fairly niche security circles where everyone either pokes holes in things for fun or distrusts even more deeply the things that must be hiding holes because they aren't known yet.

    Most of what erodes public trust(still probably less than it deserves to be eroded) is technology performing ex
    • Y2K eroded public trust in technology, and that was largely a fantasy (the public perception of Y2K as opposed to the actual issues).
  • Google is driven by revenue and information vacuuming, not goodness. This will probably be an attempt to drive developers into using Google AI coding tools. Then people will get to pay for the pleasure of meeting Google's requirements and train Google's AI coding tools.

  • Why would bugs "erode trust" when users don't even know what the term means and coders understand they're just bugs to be dealt with?

    How much do the so-called editors get paid to vomit such garbage onto Slashdot? It's not news for nerds, nor news for normals.

    • > Why would bugs "erode trust"...

      I'm just kind of stunned by this comment. So the question is whether or not security vulnerabilities in software... actually create any concerns about the security of software?

      > ...when users don't even know what the term means...

      Users?! I took that statement (made by Google's security researchers) as saying the erosion of trust is happening "across the industry" -- so, among software professionals. ("For decades, memory safety vulnerabilities have been
  • by Fly Swatter ( 30498 ) on Saturday March 01, 2025 @12:28PM (#65203495) Homepage
    Yeah, I know, because it's efficient. But by now someone should be working on an architecture that has two memory pools - one strictly for executable code and the other for data. With the underlying architecture designed so that it is impossible to mix the two.

    Yes there will be limitations. Can't be much worse than the current strategy of a flat landscape and trying to keep artificial memory walls patched together.
    • Because those "artificial memory walls" work, except in cases where the implementation had a well-known flaw (Meltdown is the only one that comes to mind).
      Page tables supporting hardware-enforced no-execute pages is ubiquitous now.
      The problem is solved.

      I worked with a Zilog about a decade back that still had Harvard architecture. Enjoy.
    • FYI: Memory is already split between instruction caches and data caches [wikipedia.org]

      You might not see this because you are working at a very high level, where it appears that all you have is one big chunk of main memory. A good way to think about all this is that RAM is like a file on a server around the world, the various cache hierarchies are intermediate files on local servers, and the CPU's memory is actually only a tiny cache line.

  • Things like this always sound to me like politicians setting up another committee: just as useless, costly, and only to the benefit of themselves.

    Maybe instead of throwing more bureaucracy at it, invest in programmers and people who are actually good at organising the code base to make it readable and maintainable, and ignore the latest AI hype, coding methodologies, shiny language features, and anything sales people are trying to offload.

  • Google calls for memory-safety standards !

    Inspiring - sounds like big tech practicing good citizenship.
    It's about time.
    But I fear that it might be a hoodwink.

    -------
    Just in the past 2-3 days, Slashdot has run these articles:

    Google Tweak Creates Crisis for Product-Review Sites
    https://tech.slashdot.org/stor... [slashdot.org]

    Google's AI Previews Erode the Internet, Edtech Company Says In Lawsuit
    https://yro.slashdot.org/story... [slashdot.org]

    Microsoft Begins Turning Off uBlock Origin, Other Extensions In Edge
    (which also talks about Google

  • When I started skimming the info, my first thought was about improving the safety of my memory. Can Google do anything about that?

  • Sounds like someone mispronouncing "cherry".

    A better idea would be to expand the R, which is an acronym by itself. Then it would be "CHERISCI".

    The final "sci" should be pronounced like in Italian (she), so it would sound like "cherish-e".

    Much better.

  • You know what the argument used to be? Garbage collection. There were what we would today call scripting languages which made it all but impossible to have memory bugs, as well as other common bugs such as data type mixups. Yes those languages were slower, but when compilers were made for them they weren't all that slow, and still the entire industry insisted on continuing to program using a language whose every feature was pared down to make it fit in the memory of a PDP-8, in a day when corporate custo
  • Computing history is full of people who are too impatient to care about data safety or security; I'm loath to think what they already have planned. I suspect they have something all worked out, and they want their jump in the race to be used in the best way. Still, it reminds me of the joke about the preacher who raised his hand high and proclaimed, "If any of you have committed adultery, may your tongue cleave fast and stick to the woof of yer malf."
