JavaScript Attack Breaks ASLR On 22 CPU Architectures (bleepingcomputer.com) 157
An anonymous reader quotes a report from BleepingComputer: Five researchers from the Vrije Universiteit Amsterdam in the Netherlands have put together an attack that can be carried out via JavaScript code and break ASLR protection on at least 22 microprocessor architectures from vendors such as Intel, AMD, ARM, Allwinner, Nvidia, and others. The attack, christened ASLR⊕Cache, or AnC, focuses on the memory management unit (MMU), a lesser known component of many CPU architectures, which is tasked with improving performance for cache management operations. What researchers discovered was that this component shares some of its cache with untrusted applications, including browsers. This meant that researchers could send malicious JavaScript that specifically targeted this shared memory space and attempted to read its content. In layman's terms, this means an AnC attack can break ASLR and allow the attacker to read portions of the computer's memory, which he could then use to launch more complex exploits and escalate access to the entire OS. Researchers have published two papers [1, 2] detailing the AnC attack, along with two videos [1, 2] showing the attack in action.
Re: (Score:2)
Re:CPUs, not CPU architecture (Score:5, Insightful)
You're confusing CPU architecture with instruction set architecture. They used to be the same (and in some cases still are), but most processors have a physical architecture that implements an ISA via microcode translation. With memory controllers (and a whole lot of other shit) on the same package, the term "architecture" has drifted even further from the ISA and more toward the entire SoC.
Layman's Terms (Score:5, Funny)
In layman's terms, this means an AnC attack can break ASLR...
'cause every layman knows what ASLR is.
Re:Layman's Terms (Score:5, Informative)
'cause every layman knows what ASLR is.
I had the same thought. At first I thought it was related to digital photography. Here is what this is really all about: https://en.wikipedia.org/wiki/Address_space_layout_randomization [wikipedia.org]
In layman's terms: Keeping the locations of things in memory unpredictable so that, for example, if I am trying to exploit some arbitrary code execution flaw, I can't count on my code ending up in the place I want or expect.
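For anyone who wants the idea in code rather than words, here is a toy sketch: an array stands in for an address space and a random slot stands in for a randomized load address. Nothing here touches real memory; the numbers and names are made up purely for illustration.

// Toy model only: "memory" is a plain array, the "gadget" is a marker string,
// and ASLR is simulated by placing the gadget at a random slot on each "boot".
const MEMORY_SIZE = 1 << 16;       // 65536 slots in our pretend address space
const GADGET = "useful code";
const FIXED_GUESS = 0x1234;        // the address the attacker hard-codes

function boot(randomize) {
  const memory = new Array(MEMORY_SIZE).fill(null);
  const base = randomize ? Math.floor(Math.random() * MEMORY_SIZE) : FIXED_GUESS;
  memory[base] = GADGET;           // "load" the code the attacker wants to reach
  return memory;
}

function attack(memory) {
  return memory[FIXED_GUESS] === GADGET;   // does the hard-coded jump land on it?
}

let hitsFixed = 0, hitsRandom = 0;
for (let run = 0; run < 1000; run++) {
  if (attack(boot(false))) hitsFixed++;    // predictable layout: always lands
  if (attack(boot(true))) hitsRandom++;    // randomized layout: almost never lands
}
console.log(`predictable layout: ${hitsFixed}/1000 hits, randomized: ${hitsRandom}/1000 hits`);

The whole point of AnC is that it lets the attacker learn the random base, which collapses this back to the predictable case.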
Re: (Score:3)
Re:Layman's Terms (Score:5, Funny)
https://xkcd.com/221/ [xkcd.com]
http://dilbert.com/strip/2001-... [dilbert.com]
Re: (Score:1)
I really like the Dilbert one.
Any true random number generator should be able to output an arbitrarily long run of nines.
If it can't, then it has a statistical distribution that makes it predictable.
Re: (Score:1)
Re: (Score:1)
Eh, it *is* as random as the people who wrote ASLR said it was. It is also nearly as useless as the people behind grsecurity said it would be ;-)
The problem is that cache aliasing can be used to do a timing attack, and the timing attack can tell you where other important stuff is. This is a conceptual thing, so it affects pretty much everything that implements memory caches, as they really share the same basic concepts.
The solution is also known, but *NOT* widely available in hardware (and certainly not
Re: (Score:3)
Keeping the locations of things in memory unpredictable so that, for example, if I am trying to exploit some arbitrary code execution flaw, I can't count on my code ending up in the place I want or expect.
Close but not quite right.
It's so that you can't count on OS/host code being at a specific address. Your own code doesn't need to care what address it's loaded at, even if it's nefarious (every architecture has relative jump instructions). The idea is that something like the browser's file I/O routines aren't placed at a predictable address, so your nefarious code can't just branch directly into them.
The main flaw of address randomization is that address information can still leak through the stack if y
Re: (Score:2)
Of course, the bad code could also scan the process memory space to find the relocation table.
Re: (Score:1)
So is this really all they are saying: they read unused stack space to get info that may have been left there by other threads, because of the way the cache isn't cleared?
No. You aren't even close to what they are saying.
The attack vector talked about is measuring the time it takes to read memory addresses. Code that has been executed recently will be in the CPU's cache(s) and can therefore be read faster.
Can I ask why someone with such poor comprehension of technical matters is reading Slashdot?
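To make that timing point concrete, here is a minimal sketch assuming a browser-ish environment with performance.now(). The constants are arbitrary, and a single access is far below the (deliberately coarsened) timer resolution, so it times big batches of reads instead of individual ones; a real attack builds its own finer timer, as discussed further down the thread.

// Sketch of the timing side channel: memory the CPU has cached reads faster
// than memory it has not. We compare a cache-friendly access pattern with a
// cache-hostile one over the same number of reads.
const SIZE = 64 * 1024 * 1024;              // 64 MB buffer, much bigger than any cache
const buf = new Uint8Array(SIZE);
let sink = 0;                               // accumulate reads so the JIT can't drop them

function timeReads(stride) {
  const t0 = performance.now();
  for (let i = 0; i < 1000000; i++) {
    sink += buf[(i * stride) % SIZE];
  }
  return performance.now() - t0;
}

timeReads(1);                               // warm-up pass
const friendly = timeReads(1);              // ~1 MB working set: mostly cache hits
const hostile = timeReads(4099);            // scattered accesses: mostly cache misses
console.log(`cache-friendly: ${friendly.toFixed(1)} ms, cache-hostile: ${hostile.toFixed(1)} ms (${sink})`);
// The gap between those two numbers is the whole side channel: by choosing
// WHICH addresses to touch and timing them, script can learn what is cached.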
Re: (Score:3)
A definition of "layman's terms":
simple language that anyone can understand [merriam-webster.com]
Note how it doesn't say "...that anyone can look up the meaning of using a search engine".
Re:Layman's Terms (Score:5, Funny)
What the hell is a search engine and how many cylinders does it have?
Re: (Score:1)
it has 8 (Score:2)
no replacement for displacement (Score:2)
Baidu on the other hand, uses a 4-cylinder Boxer search engine.
And Bing uses that pathetic GMC 3-cylinder they used in the Hummer H3.
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
I told the PHB it had 13 just to quiet him. However, he now gets nervous on Fridays.
"Relax, I installed extra Flux Capacitors to shore it up. Can I have a raise?"
Re: (Score:1)
In layman's terms, this means an AnC attack can break ASLR...
'cause every layman knows what ASLR is.
Do you know what Google is? I know, it's hard right?
Re: (Score:3)
For what it's worth, I was already familiar with that acronym. I was questioning whether a layman would be.
You seem to be confusing "Layman's terms" with "Anything that can be looked up on Google".
Re: (Score:1)
I know what Google is, and Startpage, and DuckDuckGo, amongst others, and it isn't hard.
It also isn't hard to write "Address Space Layout Randomization (ASLR)" the first time the acronym is used in a text, when the total effort of all the people who are going to look it up is likely to be much larger than the effort to type those words. Yes, that is someone else's effort and not yours when you write a text, but if others do the same, you benefit from that, and the total text effort of humanity while readin
Re: (Score:2)
"Do you know what Google is? I know, it's hard right?"
Google (as a meta service) also relies on people explaining terms somewhere on the web...
Re: (Score:2)
Re: (Score:1)
All your base address are belong to us. Somebody set up us the bomb. Let's fighting love. I'm so Ronery.
Re:Layman's Terms (Score:5, Insightful)
I have trouble comprehending the small mental world you live in where all of your knowledge is equally available at all times.
There's a reason why it's polite to gloss your acronyms on first use, even in the narrowest academic publications.
Just yesterday I was reviewing the literature on machine learning. The Juergen Schmidhuber review alone begins with the following glossary:
AE: Autoencoder
BFGS: Broyden-Fletcher-Goldfarb-Shanno
BNN: Biological Neural Network
BM: Boltzmann Machine
BP: Backpropagation
BRNN: Bi-directional Recurrent Neural Network
CAP: Credit Assignment Path
CEC: Constant Error Carousel
CFL: Context Free Language
CMA-ES: Covariance Matrix Adaptation ES
CNN: Convolutional Neural Network
CoSyNE: Co-Synaptic Neuro-Evolution
CSL: Context Sensitive Language
CTC: Connectionist Temporal Classification
DBN: Deep Belief Network
DCT: Discrete Cosine Transform
DL: Deep Learning
DP: Dynamic Programming
DS: Direct Policy Search
EA: Evolutionary Algorithm
EM: Expectation Maximization
ES: Evolution Strategy
FMS: Flat Minimum Search
FNN: Feedforward Neural Network
FSA: Finite State Automaton
GMDH: Group Method of Data Handling
GOFAI: Good Old-Fashioned AI
GP: Genetic Programming
GPU: Graphics Processing Unit
GPU-MPCNN: GPU-Based MPCNN
HMM: Hidden Markov Model
HRL: Hierarchical Reinforcement Learning
HTM: Hierarchical Temporal Memory
HMAX: Hierarchical Model "and X"
LSTM: Long Short-Term Memory (RNN)
MDL: Minimum Description Length
MDP: Markov Decision Process
MNIST: Mixed National Institute of Standards and Technology Database
MP: Max-Pooling
MPCNN: Max-Pooling CNN
NE: NeuroEvolution
NEAT: NE of Augmenting Topologies
NES: Natural Evolution Strategies
NFQ: Neural Fitted Q-Learning
NN: Neural Network
OCR: Optical Character Recognition
PCC: Potential Causal Connection
PDCC: Potential Direct Causal Connection
PM: Predictability Minimization
POMDP: Partially Observable MDP
RAAM: Recursive Auto-Associative Memory
RBM: Restricted Boltzmann Machine
ReLU: Rectified Linear Unit
RL: Reinforcement Learning
RNN: Recurrent Neural Network
R-prop: Resilient Backpropagation
SL: Supervised Learning
SLIM NN: Self-Delimiting Neural Network
SOTA: Self-Organizing Tree Algorithm
SVM: Support Vector Machine
TDNN: Time-Delay Neural Network
TIMIT: TI/SRI/MIT Acoustic-Phonetic Continuous Speech Corpus
UL: Unsupervised Learning
WTA: Winner-Take-All
And it's but one of dozens of fields where I stick my finger into the alphabet pie.
my CPU sense stopped tingling (Score:2)
"Javascript Attack Breaks ASMR on 22 CPU Architectures"
Re: (Score:3)
oh look. I typed "aslr" into google
Why did you do that? Because the summary was poorly written?
Re: (Score:2)
Not at all. I was referring to "laymen" in the discussion at hand, not to the acronym. But it takes 2 braincells to rub together to see that. You are obviously lacking them.
javascript is incompatible with security (Score:1, Informative)
Ever since javascript came onto the scene, every few days we have seen some exploit using it as an attack vector.
It is a fundamentally flawed idea to run javascript that any random site happens to deliver to you. The number of ways that can go badly seems to be effectively endless.
If you care at all about the security of your machine, you should not be running javascript by default. This is where a bunch of people come out of the woodwork to say "but we need it to view $RANDOMSITE!
Re: (Score:1)
The "exploit" could be expressed in any language -- it has nothing to do with Javascript intrinsically. The point is that something as high level as js can perform the aslr observation.
Re:javascript is incompatible with security (Score:5, Informative)
OK, fair enough, but if it's expressed in another language (assuming it's not part of your OS) you have to explicitly get and run the malicious software. If it's javascript you get it just by visiting a web page with default browser settings.
Delivery is different, even if in theory you could get it via some other means.
Re:scripting is incompatible with security (Score:1)
That's sort of inaccurate as well; "scripting" itself is overly specific, since the real issue is executable code in general.
Re:scripting is incompatible with security (Score:5, Insightful)
Don't run code you don't trust.
Javascript is code, no matter how much your browser tries to sandbox it or put shackles on it, it's going to be flying around in your CPU if you let it run.
If you don't trust the Javascript, don't run it.
There are 3 points to this problem:
Shitty fucking developers write shitty fucking websites that NEED Javascript to function.
Shitty fucking users like shiny, stupid shit and encourage that behavior.
Shitty fucking browsers let it all run by default and focus on speed, not security to please the shitty fucking users.
(And this loops back to shitty fucking developers seeing that they can bloat up their site even more because Chrome v8247 tweaked Javascript regex performance to be 2.8% faster.)
Re: (Score:1)
Dude, at the end of the day, unless you've written every bit of code your computer is running yourself, you'll be running code you can't trust 100%.
Whether it's some shitty JavaScript in the browser or some shitty telemetry gathering shit in the shitty OS, you might well be boned.
Re:scripting is incompatible with security (Score:4, Insightful)
Re: (Score:2)
Only terrorists need javascript. We must ban it to prevent it from sapping and impurifying all of our precious bodily fluids.
(My apologies to General Jack D. Ripper)
Re: (Score:2)
Point to me a decent web site that could be developed without JavaScript. Please do.
Re:could be (Score:1)
Re: (Score:2)
The problem is that it's designed to be insecure because it's based solely on the advertising model. Somehow the old AOL subscription model is not seeming so bad these days.
Re: (Score:3)
Don't run code you don't trust.
Not possible on a modern computer. But thanks for the advice.
Re: (Score:2)
What do you mean? A binary or Gray code 11?
Re: (Score:1)
1011
I only hope (Score:2, Funny)
Somebody can tell me how I can block this attack with a HOSTS file?
Re: (Score:1)
Why was THIS modded down? This would actually work... to some degree, if you had all the ad networks in there and didn't visit any malicious sites. (At least as far as the *JavaScript* vector goes, that is.)
Re: (Score:2)
Why was THIS modded down? This would actually work... to some degree, if you had all the ad networks in there and didn't visit any malicious sites. (At least as far as the *JavaScript* vector goes, that is.)
That's basically ludicrous. You're better off disabling javascript and flash and leaving your hosts file untouched.
Actually, if you wanted a way to make the web more secure? Make all the browsers default to only Javascript 1.1 or some other ancient version with just enough built-in support for DOM tweaking to maybe update the status ticker, and then ban all cross-site loading of js files that isn't over HTTPS.
Re: (Score:1)
It was downvoted because it wouldn't work. A large number of scripts come from the same domain as the html and css files and images. Host-based blocking does not give you the fine-grained control you'd need to block one but not the others. But somebody will say use a hosts file to block ad servers and some other fine-grained blocker for the domains that matter, to which I reply: why use two blockers to do the job of one? This is why he who should not be named is such a worthless troll, his solution doesn't actuall
crazy (Score:4, Funny)
who would run anything on a machine with 22 CPUs? That's just ASKING to have your ASLR broken, right?
Re: (Score:3, Informative)
Multi-CPU motherboards existed before multi-core CPUs. Kids these days...
Personally, I would run my frying pan on a computer with 22 CPUs.
Re: crazy (Score:1)
Ohio Scientific's Challenger III had three CPUs: a 6800, a 6502A, and a Z80. Programming in assembly got fun...
Re: (Score:2)
Some people had Z80 expansion boards for their Apple IIe so as to run CP/M in them.
Re: (Score:2)
With a special definition of 22: ARM from several manufacturers and x86 (Intel and AMD).
So I personally count 2 (thumbs down for this way of counting).
For example, they do not mention any architecture that uses hashed page tables, like Power (and others). Apparently their method only works on processors in which the MMU walks a tree of page tables on a TLB miss, and it comes from the interaction of the page-table walks with the caches. Hash-based page tables do not have the same properties in this respect and (at least on Po
Re:That's not archetecture (Score:1)
Re: (Score:1)
Not the whole story? (Score:3)
Re: (Score:2)
If you can read memory arbitrarily via this exploit, your sandbox is most certainly NOT secure. It's just another step to modifying memory contents after that and getting a full breakout.
This exploit looks to be especially effective against cloud architecture as it currently stands.
A whole lot of machines are inherently more compromised as a result of this, too. Because the idiot manufacturers do things like hard-locking a 64-bit system to 2GB of RAM (Toshiba and Dell and HP), it makes ASLR essentially fuck
Re: (Score:2)
If you can read memory arbitrarily via this exploit
I understand the exploit lets the attacker discover the randomized addresses, and hence have the knowledge of where vulnerable stuff is loaded in memory. I suspect the notion of read protection bypass was added by the journalist.
Re: (Score:2)
If you can read memory arbitrarily via this exploit, your sandbox is most certainly NOT secure.
True, for a broader definition of the word, it isn't. What I had in mind is contained execution of code.
It's just another step to modifying memory contents after that
How? How does this help you create a hole where there isn't one? And if there is one, shouldn't that be addressed first?
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
ASLR is not memory protection. Breaking ASLR does not give read or write access to any memory that the process did not already have. ASLR was introduced to help mitigate a bunch of buffer overflow attacks, by not having a 'predictable' address that the attacker could branch to in order to execute his malicious code.
So exactly HOW are you going to intercept that password, using a technique that would not be possible if ASLR was not broken?
Re: (Score:2)
Re: (Score:2)
Which has absolutely nothing to do with ASLR. In fact, since it was published in 2015 ASLR was most likely in use for that attack and did not stop it.
Re: (Score:2)
So how exactly does this hurt me if the VM sandbox is secure? The paper seems to imply that you need other, much worse vulnerabilities to begin with to make use of this (beyond extracting information).
Breaking out of a sandbox in a non ASLR virtual memory space is a solved problem. ASLR was developed to help prevent all the malware that worked that way.
Re: (Score:2)
Breaking out of a sandbox in a non ASLR virtual memory space is a solved problem.
Could you direct me to instructions on how I do this in my Smalltalk VM, for example?
Re: (Score:2)
Breaking out of a sandbox in a non ASLR virtual memory space is a solved problem.
Could you direct me to instructions on how I do this in my Smalltalk VM, for example?
Nope. Not without a generously funded pen testing contract.
You can find your own references on how to do it in a browser.
Re: (Score:2)
which will always exist because bugs will always exist
That's a rather pessimistic view. Memory safety of managed languages in particular does not constitute a tremendously high bar. Or do you provide opcodes for random peeks and pokes? You probably shouldn't.
Re: (Score:2)
BeauHD (Score:5, Funny)
I thought Slashdot was supposed to be a tech site. What does Javascript attacks breaking ASLR on 22 microprocessor architectures have to do with tech?
In layman's terms ... (Score:2)
Security services? Federal law enforcement with lots of funding? Government workers? Private sector? Groups of very smart people?
People with skills and a few powerful computers? People reusing code created by people with skills and one home computer?
Any news on ip ranges and time zones?
In lay terms you say ... (Score:2)
In layman's terms, this means an AnC attack can break ASLR and allow the attacker to read portions of the computer's memory, which he could then use to launch more complex exploits and escalate access to the entire OS.
Whaaaa....??
Re: (Score:2)
Whaaaa....??
I feel the same way. Where's the damned car analogy?
Re:In lay terms you say ... (Score:5, Funny)
You have got a car with Piers Morgan sitting in it. An attacker wants to head butt him in the face (trying to think of a backronym for AnC for this - I have Attacker Nuts... but I can't think of a word beginning with C that describes Piers Morgan) so, for his own protection, you choose where he sits in the car by a random process (Arsehole Seat Location Randomisation), so the chances are the attacker opens the wrong door.
Anyway, it turns out that you can tell by how the car is riding on its springs where Piers Morgan is.
Re: (Score:2)
You have got a car with Piers Morgan sitting in it. An attacker wants to head butt him in the face (trying to think of a backronym for AnC for this - I have Attacker Nuts... but I can't think of a word beginning with C that describes Piers Morgan) so, for his own protection, you choose where he sits in the car by a random process (Arsehole Seat Location Randomisation), so the chances are the attacker opens the wrong door.
Given what I've heard regarding Piers Morgan, I'd probably want to help the attacker identify the correct door!
Maybe it's time to return to LISP machines (Score:4, Interesting)
No, semi-seriously.
The concept of a LISP machine was a computer which only executed one programming language, or at least only one language in which non-built-in code would execute.
And that language was memory secure, in that it packaged memory use into high-level cells which referenced each other in a single standard way.
There was no way that a process could "break out" and access something else's memory. A LISP program running in one process only understood and could access its own linked memory cells.
This was enough programming freedom to program whatever you wanted, and the point is, the memory model was simple, uniform, and thereby secure.
I'm not exactly saying return to LISP machines. I'm saying return to an architecture which includes a simple and secure memory access model, with no workarounds to the high-level memory cell access permitted. This could be enforced at the machine-language level, and/or by restricting allowed programming languages to inherently memory-secure ones.
Re: (Score:2)
I'd rather not. These things are a failure in the market for a reason.
Re: (Score:1)
This is more of a side-channel attack. It looks like it works by timing how long it takes to access memory. Since cache misses are so expensive, it's easy to determine when you've encountered one, and so you can't easily guard against this.
dom
Re: (Score:1)
I have trouble imagining how this concept would prevent me from breaking out of this sandbox-ish environment unless LISP functions _are_ chip instructions. If they are not, there is a theoretical possibility to change pointers. Can you elaborate on that idea?
Re: (Score:2)
If you cannot share memory between processes, you take a performance hit every time you need to share data. For some applications, this is a deal-breaker.
If you can share memory in any way, that sharing mechanism can be broken somehow.
Pick your poison.
(P.S. - The market made its choice long ago.)
Re: (Score:2)
Isn't it enough to be able to share memory between threads, rather than full processes, for most concurrent programming purposes?
I stipulated that no programs except for those written in the single high-level language would be permitted to run on the machine. And that language would be designed to only allow secure, in-bounds memory access, via use of a high-level memory model such as LISP uses.
So how would you write the exploit and get it to execute on the machine? You'd write it in the LISP equivalent lan
Why? (Score:3)
this component shares some of its cache with untrusted applications, including browsers
Why does the MMU need to give user-space apps access to its cache? Aren't the OS, firmware, and microcode supposed to provide a logical view of hardware like memory to prevent this sort of abuse?
Re: (Score:2)
Re:Why? (Score:5, Informative)
x = a[n];
x = a[n+1];
You know you hit a cache miss on the second access, and that the end of a cache line is right between a[n] and a[n+1]. Based on the offset from where the cache line boundary would normally fall, you can figure out how big the ASLR padding is. Once you know the padding size, you can know exactly which address to jump to when you inject your shell code (i.e., your compiled assembly exploit).
There are other ways to defeat ASLR too, so I am not sure how useful this is, but the more techniques a hacker has, the better (from his perspective).
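If you want to see that kind of effect from script, here's a rough sketch (not the researchers' code; buffer sizes and strides are made up) that makes cache-line geometry visible by walking a buffer at increasing strides and timing each walk. Throughput drops once the stride exceeds the line size, because every access then lands on a new line. AnC plays the same game one level up, against the cache lines that hold page-table entries.

// Walk a large buffer at different strides, doing the same number of accesses
// each time. Once the stride passes the cache-line size (typically 64 bytes),
// every access touches a fresh line and the walk gets noticeably slower.
const buf = new Uint8Array(32 * 1024 * 1024);   // 32 MB, power of two for cheap wrap-around
let sink = 0;

function timeWalk(stride) {
  const accesses = 1 << 20;                     // 1M accesses per run, independent of stride
  const t0 = performance.now();
  for (let a = 0, i = 0; a < accesses; a++) {
    sink += buf[i];
    i = (i + stride) & (buf.length - 1);        // wrap within the buffer
  }
  return performance.now() - t0;
}

for (const stride of [1, 16, 32, 64, 128, 256, 512]) {
  console.log(`stride ${stride}: ${timeWalk(stride).toFixed(1)} ms`);
}
console.log(sink);   // keep the reads observable so the JIT doesn't remove them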
Re: (Score:2)
x = a[n]; //runs in .5microseconds
.
.
.
OK. So you know where your data sits in the cache.
you can know exactly which address to jump to
What sort of shit-tier system are we running that allows us to jump to a location in data? Oh, yeah. JavaScript.
Re: (Score:3)
It doesn't. A high-precision timer is used to execute a timing attack to infer what the cache contains; major browsers nixed their built-in javascript high-precision timers, but the researchers managed to cobble together their own (from allowed javascript functions, presumably), which incidentally reintroduces older attacks like Rowhammer. Browsers can fudge javascript timing, but the larger problem would remain. Presumably, microcode updates could fix this.
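For the curious, the cobbled-together timer is roughly this shape (a sketch, not the paper's actual code): a web worker spins on a shared counter, and the main thread reads the counter before and after whatever it wants to time. This assumes SharedArrayBuffer is available, which browsers later locked behind cross-origin isolation for exactly this kind of reason.

// A counting thread as a clock: the worker increments a shared 32-bit counter
// as fast as it can, so "time" is measured in counter ticks instead of ms.
const sab = new SharedArrayBuffer(4);
const counter = new Int32Array(sab);

const workerSrc = `
  onmessage = (e) => {
    const c = new Int32Array(e.data);
    for (;;) Atomics.add(c, 0, 1);   // tick forever
  };
`;
const worker = new Worker(URL.createObjectURL(new Blob([workerSrc], { type: "text/javascript" })));
worker.postMessage(sab);

const ticks = () => Atomics.load(counter, 0);

// Usage: give the worker a moment to spin up, then time something in ticks.
setTimeout(() => {
  const junkBuf = new Uint8Array(1 << 20);
  let junk = 0;
  const t0 = ticks();
  for (let i = 0; i < junkBuf.length; i++) junk += junkBuf[i];   // the thing being timed
  const t1 = ticks();
  console.log(`that took ${t1 - t0} ticks (${junk})`);
}, 100);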
We should be able to browse without JavaScript (Score:2)
"Lesser known" (Score:3)
Article and/or citation are garbage (Score:2)
>, which is tasked with improving performance for cache management operations.
Stopped reading right there. These guys have no idea what they're talking about.
Re: (Score:2)
The simple version is, "that's not what an MMU does at all": the MMU translates virtual addresses to physical ones by walking page tables; it isn't there to speed up cache management.
Bad reporting (Score:5, Informative)
This new JavaScript attack does *NOT* by itself compromise data, but simply provides a way to remotely extract the address space layout randomization currently employed by the OS. It does this by employing a JavaScript timer to measure page-table walk times, which are induced by executing JavaScript that accesses carefully selected offsets in large objects (an earlier attempt to do this was frustrated by JavaScript implementations deliberately sabotaging the built-in high-precision timer object). Once the specific ASLR pattern is determined for this specific boot of the kernel, other kernel vulnerabilities that involve direct access to aliased cache and/or memory locations, and that were mitigated by the kernel doing ASLR, can now be modified to target the desired addresses on the target.
It's like knowing how to make a key to break into a specific car, but if you use it on the wrong car it triggers the alarm, and you don't know which car the key works on. If you magically had a way to map the VIN to the car key, you could make a key that works for that car and steal it. The car dealers have this mapping, so they can make a key for you, but what if someone came up with a way to figure out the VIN->key mapping over the internet?
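In code, the measurement described above is roughly this shape. It is only a sketch under big assumptions: it uses performance.now() where the real attack needs the much finer counter-based timer mentioned elsewhere in the thread, it skips the cache eviction between probes, and the strides are illustrative rather than taken from the paper.

// Probe a large allocation at page-granular offsets and record how long each
// touch takes. When the TLB misses, the access drags in a page-table walk, and
// the walk's latency depends on which page-table cache lines are already hot.
// Collected over many runs and strides (roughly 4 KB, 2 MB, 1 GB to exercise
// the different levels of the page-table tree), those latencies are what the
// offline analysis turns into ASLR bits.
const PAGE = 4096;                                // x86-64 base page size
const buf = new Uint8Array(256 * 1024 * 1024);    // spans many page-table entries
let sink = 0;

// Stand-in timer; the real attack needs something far finer grained.
const now = () => performance.now();

function probe(offset) {
  const t0 = now();
  sink += buf[offset];                            // may force a page-table walk
  return now() - t0;
}

const samples = [];
for (let page = 0; page < 512; page++) {
  samples.push(probe(page * PAGE));               // one probe per page
}
samples.sort((a, b) => a - b);
console.log(`fastest ${samples[0].toFixed(4)} ms, slowest ${samples[samples.length - 1].toFixed(4)} ms (${sink})`);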
Re: (Score:1)
It has been responsibly disclosed for over 3 months through the Dutch CERT. Apple has already implemented some (undisclosed) fixes against the attack in their latest Safari update (which also acknowledges the research group).
Given that the typical academic research group is at most a couple of dozen people, one can imagine that such attacks are quite likely already in the hands of state agents. So the real question should be about the apathy of the majority of the vendors.
lesser known? (Score:2)
So the MMU is a lesser known part of the CPU these days? *sigh*
In other words (Score:2)
In other words, we've created CPUs with instruction set architectures so sophisticated that they can't be made safe from exploitation.
I may not understand the solution (if there is one), but I certainly admire the problem.
Re: (Score:2)
Could you summarise?