
Chromium Project Finds 70% of Its Serious Security Bugs Are Memory Safety Problems (chromium.org)

"Around 70% of our serious security bugs are memory safety problems," the Chromium project announced this week. "Our next major project is to prevent such bugs at source."

ZDNet reports: The percentage was compiled after Google engineers analyzed 912 security bugs fixed in the Chrome stable branch since 2015, bugs that had a "high" or "critical" severity rating. The number is identical to stats shared by Microsoft. Speaking at a security conference in February 2019, Microsoft engineers said that for the past 12 years, around 70% of all security updates for Microsoft products addressed memory safety vulnerabilities. Both companies are basically dealing with the same problem, namely that C and C++, the two predominant programming languages in their codebases, are "unsafe" languages....

Google says that since March 2019, 125 of the 130 Chrome vulnerabilities with a "critical" severity rating were memory corruption-related issues, showing that despite advances in fixing other bug classes, memory management is still a problem... Half of the 70% are use-after-free vulnerabilities, a type of security issue that arises from incorrect management of memory pointers (addresses), leaving doors open for attackers to attack Chrome's inner components...

While software companies have tried before to fix C and C++'s memory management problems, Mozilla has been the one that made a breakthrough by sponsoring, promoting and heavily adopting the Rust programming language in Firefox... Microsoft is also heavily investing in exploring C and C++ alternatives... But this week, Google announced similar plans as well... Going forward, Google says it plans to look into developing custom C++ libraries to use with Chrome's codebase, libraries that have better protections against memory-related bugs. The browser maker is also exploring the MiraclePtr project, which aims to turn "exploitable use-after-free bugs into non-security crashes with acceptable performance, memory, binary size and minimal stability impact."

And last, but not least, Google also said it plans to explore using "safe" languages, where possible. Candidates include Rust, Swift, JavaScript, Kotlin, and Java.

  • safe? (Score:2, Interesting)

    by gbjbaanb ( 229885 )

    "safe languages, including javascript"

    lololol

    Why can't they just make some C/C++ extension to the language that gives them memory-safe allocations? Even Rust ends up making the same memory allocations as C++; the only difference is how the language manages them. There is no reason C cannot do the same. And by adding those memory "safe" mechanisms, they would maintain their existing investment in the current, already debugged and checked, codebase.

    Rust doesn't seem to have anything C++ doesn't already have anyway: reference counted pointers and smart pointers. I imagine the areas where memory allocations go wrong will also go wrong in Rust - e.g. their Rc reference counted smart pointer is only for use within a thread, so someone will end up sharing one outside the thread and end up with the same memory problems they had anyway.

    • Re:safe? (Score:5, Insightful)

      by augo ( 6575028 ) on Sunday May 24, 2020 @09:48AM (#60098394)

      Rust's main memory safety feature is not reference counting. It is memory ownership tracking.

      • by Uecker ( 1842596 )

        Which could be added to C.

        The main problem with C is that traditionally both programmers and compilers traded safety for speed. But one can write very safe C if one wants to, e.g. by using a library for string handling instead of using char pointers directly. Also, compilers and other tools can nowadays find many issues, e.g. one can use -fsanitize=undefined, valgrind, etc. If you use those tools right and follow some rules, I do not think there is a huge difference to Rust in terms of safety.
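
        For instance (a made-up snippet, not from any real codebase; it compiles as C or C++), a heap use-after-free like the one below is exactly the kind of bug that AddressSanitizer (compile with -fsanitize=address) or valgrind will flag at runtime:

        #include <stdlib.h>

        int main(void) {
            int *p = (int *)malloc(sizeof *p);   /* cast kept so this also compiles as C++ */
            *p = 42;
            free(p);
            return *p;   /* heap use-after-free: ASan and valgrind both report this line */
        }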

        BTW, you can g

        • Crapping on C is the fashion du jour. The specification is short and explains things in terms that a lay person can mostly understand. Why needlessly complicate things to account for incompetence?
        • I don't use low level languages much (mainly sticking to python and java right now) but one thing I found impressive about Rust is how it makes it so that I can safely reuse variables at compile time if I want to, much like a high level language does, and it does it all without the need for a garbage collector.

          Admittedly, my low level experience is mostly limited to arduino stuff. I've dabbled with C but I've had such little use for it that I don't retain it very well after I'm done with it, and I've been t

    • Indeed there are plenty of ways to address it.
      There are garbage collectors that can be used which automatically intercept any call to free().

      Heck even just using this instead of free() takes care of most of it:

      void free_and_clear(void **ptr)
      {
          if (ptr != NULL && *ptr != NULL) {
              free(*ptr);
              *ptr = NULL;
          }
      }

      Just search and replace all of your calls to free(ptr) with free_and_clear(&ptr)

      • Indeed there are plenty of ways to address it.
        There are garbage collectors that can be used which automatically intercept any call to free().

        Nope. The correct answer is to stop using malloc() and free(). There's absolutely no need for it in 2020. Use a language feature that obeys RAII principles instead.
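
        For example, a minimal sketch (hypothetical Buffer type, not anything from Chrome) of letting an object's lifetime own the allocation instead of calling malloc()/free() by hand:

        #include <cstddef>
        #include <vector>

        struct Buffer {
            std::vector<unsigned char> data;            // owns the memory; no malloc()/free() in sight
            explicit Buffer(std::size_t n) : data(n) {}
        };

        void use_buffer() {
            Buffer buf(4096);                           // allocation happens here
        }                                               // destructor frees it exactly once, even on exceptions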

        • That's a great idea. If you're starting a fresh code base today, a brand new product. RAII, in C++ or another language, works well for certain kinds of objects (on the stack) in certain contexts.

          It might even work for some internal parts if you're writing a brand new module for an existing system, but the existing system is going to have structs passed in and out of modules, structs which contain traditional data types.

          The heap is also a thing, and scope isn't always as clear-cut as RAII would like it to

          • Stopping use of malloc() and free() also implies using a different mechanism to replace them.

            Even in existing codebases you can replace malloc() with a function called createXXX() and an associated function called destroyXXX() that does initialization and finalization of the block of memory returned by malloc().

            That way you can check for leaks when the program terminates, multiple frees of the same block while it's running, etc., and you know the block of memory returned to the main program will be initiali

            • I don't use malloc/free/new in my C++ code when I am forced to use it. But I would rather utilize 1990s technologies now present in Java or .Net.

              What, exactly, is the real difference between RAII and Rust? Certainly easier to RAII parts of an old application than to rewrite them in a new language.

              What I really want is compiler warnings about code like
                  log("the answer is " + 42);

              -WNoPointerArithmetic -WNoUninitializedStuff
              etc.
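
              For what it's worth, here is a sketch (illustrative only) of why that line is a trap and of the type-safe spelling that needs no new warning flag:

              #include <iostream>
              #include <string>

              int main() {
                  // const char *oops = "the answer is " + 42;   // pointer arithmetic past the literal, not concatenation
                  std::string msg = std::string("the answer is ") + std::to_string(42);
                  std::cout << msg << '\n';                      // prints: the answer is 42
              }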

              • What, exactly, is the real difference between RAII and Rust?

                Not enough to make me switch to Rust.

                Rust has its own problems, too, things like "moved variables can be a pain".

                Certainly easier to RAII parts of an old application than to rewrite them in a new language.

                Exactly.

                • Should read "moved" variables...

                  Rust variable ownership is more like C++ std::auto_ptr and can be a pain to use.
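
                  A rough C++ sketch of that analogy (not a claim about Rust's exact rules): moving a std::unique_ptr leaves the source null, and nothing stops you from touching it afterwards, whereas Rust rejects use-after-move at compile time.

                  #include <memory>

                  int main() {
                      auto a = std::make_unique<int>(5);
                      auto b = std::move(a);   // ownership moves to b; a is now null
                      // return *a;            // still compiles in C++, crashes at runtime
                      return *b;               // Rust refuses to compile the equivalent of the commented line
                  }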

        • Can you tell me how mALLOC does not conform to "resource ALLOCation is initialization"? Why do you need all of these functions to be abstracted away either by the kernel or whatever compiler / interpreter you are using?
    • The language already supports this in the Standard Template Library. The problem is roll-your-own developers or lower level libraries used to handle I/O to memory. Some of these flaws are conflated because they include hardware bugs solved by software as a "memory" issue.
    • "safe languages, including javascript"

      lololol

      Why can't they just make some C/C++ extension to the language that gives them memory-safe allocations?

      C can't.

      C++ can (and often does; e.g. it has been enabled by default in Visual C++ for a number of years).

      I'm sure this report is mixing up the two languages and that most of the problems are in C code and not C++ code. This may not be the fault of Chrome programmers, as the basic APIs of Windows are all based around C code (DLL files, COM, etc.)

    • > Rust doesn't seem to have anything C++ doesn't already have anyway, reference counted pointers and smart pointers

      Rust has an ownership model with object lifetimes and move semantics by default. This is unique and unlike anything else. Yes, it also has reference counting (Rc and Arc), but because they imply runtime overhead they are not considered idiomatic usage.

      > eg their Rc reference counted smart pointer is only for use within a thread, so someone will end up sharing
    • by ljw1004 ( 764174 )

      I imagine the areas where memory allocations go wrong will also go wrong in Rust - e.g. their Rc reference counted smart pointer is only for use within a thread, so someone will end up sharing one outside the thread and end up with the same memory problems they had anyway.

      That's actually incorrect, and is one of the most beautiful/essential/novel parts of Rust... (1) The "ownership" part of the type system ensures, at compile time, that one thread can't mutate a data structure while other threads read it. (2) The "lifetime" part of the type system ensures, at compile time, that one thread can't release memory while another thread still tries to read it. (3) The reference-counting libraries in Rust use ownership to enforce thread-safety too.

    • TFA is stupid. Chromium has very little C in it, if any. From what I've read of the Chromium source, over a few versions at least, all of it is either C++ or JavaScript. No C at all.
    • by swilver ( 617741 )

      They can't, adding another extension to C/C++ would make it implode under its own weight.

  • How the hell did they not have a memory-safe library before even starting the project?

    • If my memory serves right: the engine Chrome uses was derived from Apple's (WebKit), which in turn was derived from KDE's HTML engine (KHTML). That's the reason those are open source, btw.
      So, no, Google didn't start the engine from scratch, so they couldn't have designed it to prevent those problems from the beginning.
    • by roca ( 43122 )

      Because there is no C or C++ library even today that guarantees all memory safety bugs are prevented.

  • Nonsense (Score:4, Insightful)

    by OneSmartFellow ( 716217 ) on Sunday May 24, 2020 @09:52AM (#60098406)

    C++ has had totally adequate "memory safety" since the adoption of boost shared pointers. It continues to improve with move semantics in later versions.

    A big part of memory safety lies in the OS level (or at least what most people consider the OS level) anyway. Loader address randomization and other features are just as important.

    But, but, Rust ..... It's the flavor of the month ! So, it must be better, even if it's decades less mature.

    yawn...

    • Meant to say shared pointers and other improvements.

    • It's pretty much impossible to write a tool which ensures the language is only used in a safe manner (see the MISRA C/THREADX failure). If you have to rely on coding conventions, you've already failed.

    • Re:Nonsense (Score:4, Insightful)

      by Gerald Butler ( 3528265 ) on Sunday May 24, 2020 @10:28AM (#60098492)

      You make the same mistake that leads to this 70%. You believe that if you can do it right, in principle, that means everyone will do it right. These stats show that that assumption is false. Either the compiler/language prevents these things effectively, or it doesn't. Relying on everyone involved to do everything right all the time doesn't work without something double-checking for well-known problems. In engineering, there is a concept called "Fail-Safe". C, C++, and almost ALL currently used languages "Fail-Unsafe" by default. We need to be using "Fail-Safe" languages.

      Unless you are willing to put funds in escrow and/or pay for significant insurance for the code you are writing, you aren't really confident in your ability to not make mistakes. I would rather use a "Fail-Safe" language before I would ever be willing to do that.

  • Bad practices (Score:5, Interesting)

    by johannesg ( 664142 ) on Sunday May 24, 2020 @09:55AM (#60098414)

    According to this Google document (https://docs.google.com/document/d/1Cv2IcsiokkGc2K_5FBTDKekNzTn3iTEUyi9fDOud9wU/edit), they are still using naked new/delete pairs. C++ has considered that to be a bad practice since at least C++11 (and the use of RAII is much older still), preferring to use std::unique_ptr and std::shared_ptr instead. Doing so completely automates the effort of freeing memory, so you cannot forget to free, nor have double frees.

    It's easy to ask for a rewrite in another language that is nominally safer, but it's probably much cheaper to finally start using basic best practices.
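
    A minimal before/after sketch (hypothetical Widget type) of what that best practice looks like:

    #include <memory>

    struct Widget { int x = 0; };

    void old_style() {
        Widget *w = new Widget();               // must be deleted exactly once, on every path
        // an early return or exception here leaks w
        delete w;
    }

    void modern_style() {
        auto w = std::make_unique<Widget>();    // freed automatically, even on exceptions
        w->x = 1;
    }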

    • Re:Bad practices (Score:4, Interesting)

      by Nemosoft Unv. ( 16776 ) on Sunday May 24, 2020 @10:49AM (#60098552)

      Unfortunately, the use of std::unique_ptr and std::shared_ptr in code isn't as easy as I had hoped. The main problems I encountered are:

      • Creating an object as a shared/unique_ptr isn't as straightforward as a "new Object"; it's always a two-stage process.
      • The use is extremely invasive: you must use shared_ptr everywhere or your code becomes a mess with dereferences, casts, weak_ptr, etc.
      • Related to the previous point is that external libraries never use a shared_ptr/weak_ptr in their calls; only naked pointers.
      • There's a range of 'automatic' pointers to choose from: auto_ptr, weak_ptr, shared_ptr, unique_ptr. Which is the best choice is not always clear. For example, look at the descriptions for unique_ptr and shared_ptr; they are very similar and they don't explain when you should use either of them.
      • More typing: "function do_something(std::weak_ptr var)" doesn't roll off the tongue (or keyboard) as easily.

      Finally, a small niggle: in earlier standards auto_ptr was pushed as "the" automatic pointer type but that has been deprecated since C++11; leaving the cleanup / conversion to you.

      In any case, my opinion: nice concept, but it just feels clumsy.

      • by LubosD ( 909058 )

        Creating an object as a shared/unique_ptr isn't as straightforward as a "new Object"; it's always a two-stage process.

        Not true ;-) Try std::make_shared<Object>(constructor_args...)

        There's a range of 'automatic' pointers to choose from: auto_ptr, weak_ptr, shared_ptr, unique_ptr. Which is the best choice is not always clear. For example, look at the descriptions for unique_ptr and shared_ptr; they are very similar and they don't explain when you should use either of them.

        I find it very straightforward. Only one
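
        For example, a sketch (made-up Thing type) of the one-step creation and the usual split between the pointer types:

        #include <memory>

        struct Thing { int value = 0; };

        int main() {
            auto owned  = std::make_unique<Thing>();   // one step; sole owner, the default choice
            auto shared = std::make_shared<Thing>();   // one step; only when ownership is genuinely shared
            std::weak_ptr<Thing> observer = shared;    // non-owning view that can detect expiry
            return owned->value + shared->value;
        }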

        • Due to job changes, I've recently returned to C++ from nearly a decade of Java & C#. It has been kind of a culture shock, completely the opposite of when I learned C++ coming from C. A lot of updates since I last worked with this language.

          I'm currently creating a library from scratch, and once I understood how to use std::unique_ptr, and ensured the library was almost all stateless, I find it works quite well (`using std::unique_ptr;` makes it easier). At about the 90% point of development, Valgrind fin

          • So how do you handle references to the inside of a class? That is where I fell down as a Java/.Net programmer.

            class Foo {
                Bar bar;
            public:
                Bar& getBar() { return bar; }
            };

            getBar() leaks a reference to the object's internals. And using shared pointers everywhere seems way overkill

            The problem is that C++ gets references wrong. They should be like var params in Pascal or ByRef params in Visual Basic: not able to be moved out of scope, at least not without an & operation that produces a compiler warning.

            • by halivar ( 535827 )

              getBar() doesn't leak because the backing variable bar is not a pointer. It will be allocated and deallocated with its parent Foo object. Nine times out of ten, this is probably the best thing to do.

              But if you DO need bar to be a pointer to Bar, and you need to return a reference to &bar as Bar*, then that is where the PIMPL design really shines. The wrapper object (of type, say, "PBar", for example) can be passed and returned by value 90% of the time with no performance penalty, thanks to copy elision and R

              • by halivar ( 535827 )

                As an addendum: in the above example, I said the pointer is not leaked. This is strictly true. But it can lead to a reference to a destroyed object (what you would call an NRE in Java and C#). Thanks to RVO/NRVO, this is much, much less of a concern. If you have a factory function that creates a PBar object, you can create it on the stack in the function, return it as an unadorned PBar, and it will be guaranteed to be returned by the function without being popped off the stack just for leaving scope. This wo
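
                A bare-bones sketch of that PIMPL shape (the PBar/BarImpl names come from the comments above; holding the impl in a shared_ptr is my assumption, made so copies of the wrapper stay cheap and safe):

                #include <memory>

                struct BarImpl { int value = 0; };              // the real state, hidden from callers

                class PBar {
                    std::shared_ptr<BarImpl> impl;
                public:
                    PBar() : impl(std::make_shared<BarImpl>()) {}
                    int value() const { return impl->value; }
                };

                PBar make_bar() {
                    PBar b;                                     // built on the stack
                    return b;                                   // returned by value; with NRVO/move, the BarImpl is never copied
                }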

      • Your list reads like you've found the language features and have tried using them yourself without looking at any of the best practice guides for these features. There's even guides for moving to C++11 or 'modern C++' or whatever it ends up being called which suggest ways to get there from a large pre-C++11 codebase. It's not hard, but it's not worth inventing your own approaches.

      • Having actually coded with these tools, it's a matter of mindset and architecture. Decide up front what you're going to use. Don't *discover* language features as you code. That's a recipe for disaster. Once you have your paradigm set, stick to it. It's easy to code once you get into the habit. It's the inertia/laziness of switching that makes things feel hard or clumsy. In practice, it's not bad at all. P.S. Once you start using the standard template library, you'll wonder why you ever did without. It's l
        • > It's less effort than learning a new language that STILL needs you to interface with C++ for the things that the language deems unsafe but is still required to do real work.

          Rust does not require you to interface with C++ except to use existing C++ libraries. It does not require going outside the language to use unsafe constructs. But calling out from Rust is inherently unsafe in the sense that the compiler cannot offer any safety guarantees about the code you're calling.

          Rust is hard work though.

      • In reference to external libraries - then wrap the external library pointer for yourself. The tool is there. Do some housekeeping. Or find a better library.
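
        Something along these lines, for instance (a sketch wrapping the real C stdio API; the FileCloser/FilePtr names are made up):

        #include <cstdio>
        #include <memory>

        struct FileCloser {
            void operator()(std::FILE *f) const { if (f) std::fclose(f); }
        };

        // unique_ptr + custom deleter: the library handle is released exactly once, automatically
        using FilePtr = std::unique_ptr<std::FILE, FileCloser>;

        FilePtr open_file(const char *path) {
            return FilePtr(std::fopen(path, "r"));
        }
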
      • Creating an object as a shared/unique_ptr isn't as straightforward as a "new Object"; it's always a two-stage process.

        What do you mean? There's std::make_shared and std::make_unique, sure a few more characters than new, but not much of a burden.

        The use is extremely invasive: you must use shared_ptr everywhere or your code becomes a mess with dereferences, casts, weak_ptr, etc.

        Sounds like you're misusing them to be honest. You need them when you have a question over ownership. If you just want to pass a poin

    • Their codebase is older than C++11, but anyway even if they got rid of all those memory bugs, they'd still have the 30% remaining.
  • Memory-related problems are what a garbage-collected language prevents and solves without effort.
    GoLang comes to mind, and here we see what? Google does not see it as an obvious candidate.

    Wow! Just wow!

    • by gweihir ( 88907 )

      Garbage collection is not a silver bullet. If the coders are incompetent (obviously the case here), garbage collection just turns security issues into performance and reliability (i.e. visible) problems. That explains readily why they are not moving to a language with garbage collection.

      The dirty secret of garbage collection is that in complex systems, it makes memory management harder.

      • by dragisha ( 788 )

        Of course it is no silver bullet. But it is a great start.

        If one has incompetent programmers, no language change will help them. GC will help reasonably competent people.

      • You really think the developers of Chromium are just incompetent? Maybe there are other reasons why the code has memory safety bugs.

        > The dirty secret of garbage collection is that in complex systems, it makes memory management harder.

        Garbage collection is just fine for lots of complex systems. But there are situations where it's not a good fit.

    • Memory-related problems are what a garbage-collected language prevents and solves without effort.

      So does C++, if you bother to use it.

      I haven't done any manual memory management in C++ for the last 20 years.

      • by swilver ( 617741 )

        C++ basically has what most automatically memory-managed languages abandoned after their first generation (early '90s). Except it's all bolted on, with huge gaping holes that you should ignore.

  • "at the source"... (Score:3, Insightful)

    by gweihir ( 88907 ) on Sunday May 24, 2020 @10:13AM (#60098462)

    The source is incompetent and cheap coders. If it is 70%, they have a massive problem with hiring the wrong people. Apparently their coders are not even competent enough to use the various existing tools that find these problems.

    • Re: (Score:3, Funny)

      Sure. You are perfect and everyone else is incompetent. Everyone involved knows how perfect you are. No need to mention it. It's a well known fact. If you look up infallible in the dictionary, you will clearly see your portrait. Unfortunately, you're only 1 person, and you don't have enough time to write all the software that the world needs.

      It's a sad truth that we need to rely upon incompetents like me to create and maintain software that you haven't yet had the time or inclination to get to. I'm

    • Comment removed (Score:5, Insightful)

      by account_deleted ( 4530225 ) on Sunday May 24, 2020 @10:45AM (#60098544)
      Comment removed based on user account deletion
      • by gweihir ( 88907 )

        There is a bit of a difference between "perfect" and "70% of security problems are memory safety".

        • There's some aspect of probability here I suspect you don't understand.

          "70% of security problems are memory safety" says absolutely nothing about the rate of security problems per engineer. All it means is that when a bug causes a security problem 70% of the time it involves memory safety. From TFS 125/130 of serious security bugs in the past year are of this type. That's 96%! Your flawed logic would say that Chrome engineers have gotten even more incompetent, but that's not what happened -- instead, they g

      • If you've been in the programming business for only 20 years, it's very possible you've never met a good programmer, period. They are rare.

        And they are becoming rarer, the vast majority of programmers these days don't even like programming, they do it for the money. Ask them if they are interested in learning something that doesn't make them money, like assembly, and they will say, "When will I ever use that?"
    • The source is incompetent and cheap coders ...

      What browser do you use to write your comments?

    • Nonsense, the most competent programmers in the world make the same mistakes, and those mistakes lurk in the huge codebases of things that run the internet. Proof of my assertion is the changelogs of Linux, the BSDs, web servers, etc.

      There are no tools that "find all the problems."

    • bullshit, all coders make mistakes, especially on large complex codebases which can see you modifying code written by someone else who will have forgotten more than he remembers about the code or who isn't even available to talk to. Code analysis tools are great and do find a lot of mistakes, but they don't catch everything.
    • Apparently their coders are not even competent enough to use the various existing tools that find these problems.

      Sure. Two of the biggest software companies in the world, Google and Microsoft, know nothing about software development. That seems unlikely.

      I think what's far more likely here is that you simply don't understand real world software development.

  • You need to compare two memory areas? Just pass the start of each area to a function. Where do these areas end? ... Let someone else care for it!

    You tell management that the code is almost complete... and management will tell you that's enough! ...

    A lot of memory bugs reflect human nature and aren't inherent to a language itself. Nor is it about memory, because the ones at the starting line get fixed fast. It's that we literally care less or not at all about what happens at the end.
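
    (To be fair, the "where does it end?" part has a well-known fix: pass a view that carries its own length. A sketch using C++20 std::span, with a made-up helper name:)

    #include <cstddef>
    #include <span>

    // The size travels with the pointer, so the end of each area is part of the type,
    // not something the caller has to remember to pass separately.
    bool same_bytes(std::span<const std::byte> a, std::span<const std::byte> b) {
        if (a.size() != b.size()) return false;
        for (std::size_t i = 0; i < a.size(); ++i)
            if (a[i] != b[i]) return false;
        return true;
    }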

    • A lot of memory bugs reflect human nature and aren't inherent to a language itself. Nor is it about memory, because the ones at the starting line get fixed fast. It's that we literally care less or not at all about what happens at the end.

      Nope.

      Any language that takes some of the burden for correctness away from the programmer and puts it into the compiler instead will be a massive help.

      • Nope.

        Any language that takes some of the burden for correctness away from the programmer and puts it into the compiler instead will be a massive help.

        No. In every new generation of developers there are some who believe that with the next language they will make fewer mistakes. But you don't really believe it yourself when you think you can blame the tool for your mistakes. You could already have avoided making them, but you didn't. You are then doomed to repeat your mistakes with every new language unless you start to learn how to do it right for every tool that you use.

  • by aviators99 ( 895782 ) on Sunday May 24, 2020 @10:52AM (#60098562) Homepage

    Using a "safe" language is fine, when it's possible. There are certainly cases where it's not easily done (think embedded systems, etc.).

    This must (at least additionally) be fixed in education. I stress memory leaks/corruption ad nauseam when I teach (at the University level). This needs to be drilled in early in a programmer's learning experience, even if they have no formal training. I, like many here, learned everything I know about programming outside of school. I'm not sure how to ensure people coming up like me get the message.

    I will say that it helps to have been on the cracking side of things. Once you have taken advantage of memory safety issues yourself, you are less likely to program them.

    • You can't always use safe memory access, but you can delineate the unsafe parts and let language design guarantee safety for the rest. Better than trying to force safety back in through coding conventions and tools which, by design, can only give somewhat haphazard warnings because of false positives/negatives that can't be avoided.

      There are good reasons to stick to C(++). After all, the market accepts exploitable memory bugs as standard and believes in snake oil like MISRA; they're just as bad at assessing ris

      • There is NO good reason to stick to pure C, not if there's a C++ compiler for your machine.

        (which is "almost never" now that g++ is available everywhere that gcc is)

  • by hAckz0r ( 989977 ) on Sunday May 24, 2020 @03:16PM (#60099660)

    The core of the problem is the CPU architecture and the lack of OS support for memory safe operations.

    There once was an OS called VAX/VMS. When you allocated memory it actually allocated a descriptor which, you guessed it, described the memory it was associated with. This was not ideal for several reasons, mostly because the programmer was forced to initialize that descriptor and to keep track of it. Epic failure, because programmers are inherently lazy people. What it did do correctly, or at least attempted, was to manage the known size of objects and their locations in system memory. What the designers did not see is that this tracking mechanism should have been a core feature of the operating system, protecting the programmer from making any of the memory management mistakes that the descriptor was designed to prevent. The CPU itself should have had a native understanding of the size, contents, initialization, and destruction of the objects, independently of the programmer's API for making use of them.

    When Bill Gates started designing the Windows NT platform he hired the VAX/VMS genius to design the 'Next Generation' (NT) version of Windows, and as such it was well designed to prevent many of the current-day issues right out of the box. The problem was that the average PC did not have enough memory to support those 'memory wasting' (not my opinion) security measures, so Mr Gates had all that security stuff (an entire privileged CPU ring level) removed for the release of Windows NT 3.0: the first version of NT that was usable with affordable hardware, and also the one with only marginally better security than the insecure Windows 3.11 for Workgroups.

    So just as memory was becoming cheap enough to buy for a normal PC, the most secure version of Windows was instead dumbed down to match the average programmer's pocketbook, with little attention paid to security at all. Lots of talking points for sales, but security holes you could drive a city bus through. Because there was no sense of ownership of memory other than User/System, you could even send a message to a privileged window and literally execute privileged code from an app no more complex than a HelloWorld app. All a virus needed to do was fork itself, let the AV detect the clone, suspend it, and pop up a notification window to warn the user; the original then sent a message to be executed by the privileged AV application, and the virus took over the system using the privileged AV app as a foothold. This was not fixed for many, many years; instead Microsoft told the AV companies not to create any privileged windows. The underlying flaw existed for many years after that.

    What is needed today is a new CPU architecture that natively understands the allocation system and an OS that keeps track of it. Each thread in the CPU should have its own encryption key in a CPU register, and the compilers should generate code that recognizes the ownership and builds constructs for the threads to safely share OS-managed data buffers. If you didn't allocate that memory and were not expressly given permission by the programmer, then that memory is off limits. It does not exist to you. Then any language you design on top of that paradigm will be a memory safe language by definition. Buffer overruns will not happen, and even if DMA arbitrators were to trip over into the next memory arena, the OS would trap the exception and terminate the hardware operation. It would not be the programmer's responsibility to do it right, because they simply would not be permitted to do it wrong.

    We could be doing this today, but everyone is fixated on one specific CPU architecture upon which, try as you might, you simply can not write secure code in any language. You can not build a high-rise luxury complex on top of a mud hut foundation.

  • The problems are almost entirely wattage problems. If you throw out (fire) the dim bulbs and keep the bright bulbs, the problems will solve themselves. Though you will probably have to get rid of all the IDE's and Algorithmic Interference (AI) as well.

  • On Intel hardware and under Microsoft Windows!"
