Genome Methods Applied to Reverse-Engineering

L1TH10N writes "Wired News has an article on a truely innovative approach to network protocol reverse-engineering. Marshall Beddoe, a security analyst, is using algorithms borrowed from bioinformatics to analyse closed-source and secret network protocols, an approach he calls "Protocol Informatics". According to Beddoe, network conversations are full of "junk" -- usually the actual data being sent -- which interferes with the analysis of the occasional command sequence that controls what to do with that junk. This parallels bioinformatics, which has to deal with the similar problem of finding known DNA sequences separated by long gaps of unknown data. Biologists have devised complex algorithms to discover whether DNA sequences are descended from the same ancestors by comparing the genetic differences with the known mutation rates of certain DNA components. Beddoe applied the same principles to the mutating conversations of evolving network protocols."
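
For illustration only, here is a minimal sketch of the kind of pairwise alignment the article describes, applied to two made-up protocol messages. It is a generic Needleman-Wunsch global alignment in Python, not Beddoe's Protocol Informatics code; the scoring values and the messages are invented.

    # Hypothetical sketch: globally align two captured protocol messages,
    # treating each byte like a DNA base. Conserved columns reveal the fixed
    # command fields; gaps absorb the variable "junk" payload.
    MATCH, MISMATCH, GAP = 2, -1, -2   # invented scoring values

    def align(a: bytes, b: bytes):
        """Return an optimal global alignment of two byte strings."""
        n, m = len(a), len(b)
        # score[i][j] = best score aligning a[:i] with b[:j]
        score = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            score[i][0] = i * GAP
        for j in range(1, m + 1):
            score[0][j] = j * GAP
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                diag = score[i - 1][j - 1] + (MATCH if a[i - 1] == b[j - 1] else MISMATCH)
                score[i][j] = max(diag, score[i - 1][j] + GAP, score[i][j - 1] + GAP)
        # Trace back to recover the aligned pair, '-' marking gaps.
        out_a, out_b, i, j = [], [], n, m
        while i > 0 or j > 0:
            if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + (
                    MATCH if a[i - 1] == b[j - 1] else MISMATCH):
                out_a.append(chr(a[i - 1])); out_b.append(chr(b[j - 1])); i -= 1; j -= 1
            elif i > 0 and score[i][j] == score[i - 1][j] + GAP:
                out_a.append(chr(a[i - 1])); out_b.append('-'); i -= 1
            else:
                out_a.append('-'); out_b.append(chr(b[j - 1])); j -= 1
        return ''.join(reversed(out_a)), ''.join(reversed(out_b))

    # Two fictitious protocol messages: same command structure, different payloads.
    msg1 = b"CMD SET user=alice len=5 hello"
    msg2 = b"CMD SET user=bob len=7 goodbye"
    top, bottom = align(msg1, msg2)
    print(top)
    print(bottom)

Presumably a real tool would align many captured messages rather than just two and then score column-by-column variability, but the pairwise step above is the core of the analogy.
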
  • by mirko ( 198274 ) on Tuesday October 05, 2004 @10:38AM (#10440019) Journal
    I guess we are on our way to finding global laws for everything :)
    • I'm not sure I see anything to do with 'laws' in this. It does look like a novel approach, and I applaud the kind of lateral thinking that led someone to apply to this task an algorithmic method developed for something in such a (seemingly) different field.

      I firmly believe that bioinformatics is going to be the next IT. Programmers will use compilers that create genetic sequences for bio-machines and bio-computers (the debugging process is the main scary part). The odd contrast to present IT
    • Actually, there are probably a lot more global laws out there than we currently know...

      I'm sure you have all read about swarm-style AI taken from nature. Google "boids" if you haven't. It's a relatively good model. It isn't too hard to imagine emergent behaviour applied to future technological problems... Certainly copying nature is a valid method of problem solving. No point trying to reinvent what already works.

  • by Tuxedo Jack ( 648130 ) on Tuesday October 05, 2004 @10:43AM (#10440088) Homepage
    If only we could find a way to apply said algorithms to spam at the gateway level...

    If that could be implemented somehow (an attached appliance or something), it could drastically cut the amount of spam that goes through.
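
A toy sketch of that idea (no real gateway product implied): score each incoming message against known spam templates with a generic sequence-similarity ratio from the Python standard library. The templates and the threshold are invented for illustration.

    # Hypothetical gateway-side check: flag messages that align closely with
    # known spam templates, much as the protocol analysis aligns messages.
    from difflib import SequenceMatcher

    KNOWN_SPAM = [   # made-up templates
        "Buy cheap meds online now, no prescription needed!!!",
        "You have won the international lottery, claim your prize today",
    ]

    def looks_like_spam(message: str, threshold: float = 0.6) -> bool:
        """True if the best similarity to any known template beats the threshold."""
        best = max(SequenceMatcher(None, message.lower(), t.lower()).ratio()
                   for t in KNOWN_SPAM)
        return best >= threshold

    print(looks_like_spam("Buy ch3ap m3ds online NOW, no prescription!"))  # True
    print(looks_like_spam("Meeting moved to 3pm, see you there"))          # False
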
  • by Anonymous Coward
    reverse-engineering methods applied to genome
  • by tilleyrw ( 56427 ) on Tuesday October 05, 2004 @10:45AM (#10440121)

    Perhaps these techniques can be applied to the never-ending task of creating an accurate converter for MS Word .doc-uments?

    Yes, simple document conversion is possible, but until 100% accuracy is achieved the race is not won.

    • Err... what am I supposed to do with a file containing just an end-of-file mark? I'm thinking that's all that will be left once all the meaningless junk is filtered out.
    • Well, the only good news is that Microsoft isn't able to reach 100% accuracy themselves, whether it involves exchange of Word documents between PCs, or between Macs and PCs.

      Bert
      Who started his own company and now understands first hand what his former secretary had to endure when battling with that productivity killer. We need competition to get rid of it. Any measure against Microsoft should involve opening the standard.
    • I tend to use the person who sent me the file as a converter, but for reading files in cases where that's not possible I use fastpdf.com, which I think scripts Word to do the conversion.
  • Modeling (Score:3, Insightful)

    by KingKire64 ( 321470 ) on Tuesday October 05, 2004 @10:46AM (#10440130) Homepage Journal
    The Human Brain... the most complex and amazing computer ever built. The more we learn about it and how it works the more we can apply to computers. Imagine the computational power of the mind put to something specific.

    I don't know what I'm talking about... but it's cool anyway.
  • So... (Score:2, Funny)

    by Anonymous Coward
    Microsoft will finally be able to figure out what is happening in their own network protocols!
  • by kyhwana ( 18093 ) <kyhwana@SELL-YOUR-SOUL.kyhwana.org> on Tuesday October 05, 2004 @10:48AM (#10440152) Homepage
    Of course, this is illegal in the US. No reverse engineering allowed.
  • by museumpeace ( 735109 ) on Tuesday October 05, 2004 @10:49AM (#10440154) Journal
    A Sciencedaily.com article [sciencedaily.com] recaps a news release about U of Toronto researchers, David Lie [utoronto.ca] and Ashvin Goel [utoronto.ca], who are at work [as in they do not have a finished tool or product to announce] on software that not only detects intrusions but backtracks to the sources and cleans up the damage. The article hints
    These naive hackers also leave clues. Although they use IP (Internet protocol) addresses to bounce from machine to machine, hackers pick up languages used on interfaces along the way, leaving a trail of breadcrumbs that trace back to the point of origin.
    that the native human language of the locale of each node in the chain used for an attack creeps into the evidence/clues. I wonder what they are talking about?
    • that the native human language of the locale of each node in the chain used for an attack creeps into the evidence/clues. I wonder what they are talking about?

      You mean like when someone defaces a webpage with "Roight! USA eats chunder! AUSSIES RUL3!!1!one!1!" they can figure out that the perp is (obviously) Canadian?
  • by w.p.richardson ( 218394 ) on Tuesday October 05, 2004 @10:52AM (#10440196) Homepage
    "Junk" in the datastream is useful (since we have made it, we use the control codes to reassemble).

    "Junk" in DNA (e.g., "latent" DNA) is probably not junk, we just don't know the function (yet). No scientist worth their salt would admit that (at least not in earshot of a grant proposal review committee!)

    • > "Junk" in DNA (e.g., "latent" DNA) is probably not junk

      Actually, there's an article in this month's SciAm that talks exactly about this. Very interesting.

      http://sciam.com/article.cfm?chanID=sa006&colID=1&articleID=00045BB6-5D49-1150-902F83414B7F4945 [sciam.com]

      • Actually, there's an article in this month's SciAm that talks exactly about this.

        Exactly? The article you've linked to (what I can see of it; I'm not a subscriber) appears to be about RNA's role in the regulation of genes.

        There's nothing about "junk DNA", although I know introns play a role in the regulation of a gene's translation. Nobody calls the DNA in those regions "junk" DNA, though.

        Not having been able to read the full article, however, I may have missed some important link into the "junk" DNA to
    • It's amusing that right now I'm investigating intronic DNA and looking for signals of selection. A few percent of the genome is conserved in non-gene regions between humans and mice (for example). Why would the DNA be conserved (against a background mutation rate) unless it was important?

      I can't think of many scientists who think about "junk" DNA anymore...but if I ever get my research finished and published, then I'll add one more nail to the coffin.
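
A back-of-the-envelope illustration of the reasoning in the comment above, with invented toy sequences and an assumed neutral divergence figure (not the poster's data): a region whose human-mouse identity is well above the genome-wide background hints at selection.

    # Toy conservation check: compare observed identity in an aligned non-gene
    # region against an assumed neutral background identity.
    def identity(a: str, b: str) -> float:
        """Fraction of identical positions in two pre-aligned, equal-length sequences."""
        assert len(a) == len(b)
        return sum(x == y for x, y in zip(a, b)) / len(a)

    BACKGROUND_IDENTITY = 0.67          # illustrative neutral human-mouse figure
    human = "ACGTTGACCTGAACGTTGACCTGA"  # toy aligned intronic fragments, not real data
    mouse = "ACGTTGACCTGAACGTTGACTTGA"

    obs = identity(human, mouse)
    print(f"observed identity {obs:.2f} vs background {BACKGROUND_IDENTITY:.2f}")
    if obs > BACKGROUND_IDENTITY:
        print("more conserved than expected under neutrality -> candidate for selection")
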
    • "Junk" in DNA (e.g., "latent" DNA) is probably not junk, we just don't know the function (yet). No scientist worth their salt would admit that (at least not in earshot of a grant proposal review committee!)

      From what I've read, there is a case that there is real junk in the DNA. Various sequences at some point in the past served a purpose, but now (like the human appendix) the original function is no longer relevant. I've also read somewhere that some of the DNA is actually a sort of virus which eons ag

      • Junk DNA acts as a protective buffer against genetic damage and harmful mutations. An overwhelming percentage of DNA is irrelevant to the metabolic and developmental processes, so it is unlikely any single, random insult to the nucleotide sequence will affect the organism.

        I read something about this in New Scientist a while ago. Blocks of a certain base (guanine?) on either side of important regions of DNA, which are more susceptible to damage (by free radicals?), serve to protect the important code, by be
    • Junk DNA is just part of the data segment.

      If you're disassembling the code of a program, the data is just junk that gets in the way until you figure out what the code is doing. Of course, the ASCII comments in the data may be useful, and from what I can tell DNA doesn't seem to have any text strings in it, so for now it's just junk.

      I haven't looked into the pattern-matching stuff the bio guys are using, but it's very handy to be able to take a bit of a program and find out where the common library functions are h
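
A rough sketch of the "find the library functions" idea in the comment above. The byte signatures are invented placeholders, not real compiler output: scan a blob for known patterns so everything else can be treated as data until proven otherwise.

    # Hypothetical signature scan over a binary blob.
    KNOWN_SIGNATURES = {
        # invented signatures for illustration only
        "memcpy_like": bytes.fromhex("5589e58b45088b4d0c"),
        "strlen_like": bytes.fromhex("31c080790000"),
    }

    def find_signatures(blob: bytes):
        """Yield (name, offset) for every known signature found in the blob."""
        for name, sig in KNOWN_SIGNATURES.items():
            start = 0
            while (pos := blob.find(sig, start)) != -1:
                yield name, pos
                start = pos + 1

    blob = b"\x00" * 16 + bytes.fromhex("5589e58b45088b4d0c") + b"\x90" * 8
    for name, off in find_signatures(blob):
        print(f"{name} at offset {off:#x}")
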
  • by Sheepdot ( 211478 ) on Tuesday October 05, 2004 @10:53AM (#10440204) Journal
    That'll come as a relief to Beddoe, who until now assumed that biologists wouldn't pay much heed to his project.

    "They're working on uncovering the mysteries of life itself; we're just hacking network protocols," he said. "Which sounds more important to you?"


    I don't think Beddoe should cheapen the reverse-engineering aspects of networking compared to biology. We may still be years away from finding a cure for cancer, AIDS, etc., and there's a good chance that biology work in this area might not be as fruitful. After all (without getting into a religious debate here), man was not created by man, whereas network protocols are. Because of this, it is relatively easier for us to reverse-engineer something that was created by another human, because we know how they think. Evolution or creation, we don't know much about our own building blocks, because we don't know either how God thinks or how the universe fully works.

    While his software is great for "hacking network protocols", the biologists paying attention to his work might not find what they are looking for. The inputs very well may be just too vast for his ideas to provide any help.

    On the other hand, the Samba team and the SpamAssassin author will most likely enjoy this.
  • Not an apt analogy (Score:2, Insightful)

    by galt2112 ( 648234 )
    I think that network protocols are not similar to unmapped genome sequences, in that network traffic is a mix of metadata and data.

    Genome sequences are much more consistent. It's all data, processed by RNA computers.
  • by Anonymous Coward
    I'd just grep the stream and be done with it.

  • true+ly = ? (Score:2, Informative)

    by kamagurka ( 606506 )
    it's "truly", damn it! TRULY!
  • Gary Larson has previously documented this phenomenon: http://home.earthlink.net/~grleone/funny/farside/ginger.gif [earthlink.net]
  • by jnull ( 639971 ) on Tuesday October 05, 2004 @11:00AM (#10440301)
    I always enjoy such articles... Technology transfer has been the cornerstone of innovation for how long? Companies study other industries in order to bring innovation to tired processes and technologies. It is responsible for many of today's disruptive technological achievements. Was it Southwest Airlines who did formal research on pit crews at Daytona (or something like that)? Regardless, keep up the good work... who knows, the next great step in reverse engineering might come from examining how Vegas tears down their casinos, or is that just what I'm thinking for Windows?

    "It is a miracle that curiosity survives formal education." --Albert Einstein
    --j
  • by medication ( 91890 ) on Tuesday October 05, 2004 @11:01AM (#10440306) Journal
    quote:
    "The problem of decoding the language of networks and the problem of finding signals in DNA are really two related instances of machine learning problems. We're almost bound to discover universal principles of information communication by investigating both," - Terry Gaasterland
    This seems like a pretty obvious conclusion after reading the article, but I'm curious why there aren't any references to pure informatics studies. Is there such a thing? After initial googling I'm only seeing bio-informatics results. Anyone have any insights as to what I should be looking for to find research/papers/studies on pure informatics or "universal principles of information communication"?
  • Prolog configured with huge stacks does the job with very little code actually written, if you are sufficiently patient.
  • Didn't realize the human Genome could be used as a hammer...
  • I've heard of race conditions in computer science, but this goes way too far.

    Seriously, how much would a Big Red Button have cost?

  • DNA vs. DMCA (Score:1, Flamebait)

    by Doc Ruby ( 173196 )
    You can't reverse engineer the genome: some of the genes are patented! Nevermind the prior art in your mom's nuclei, they literally own your ass - you've just got a limited license to use it. When they release the retrovirus with the broadcast flag flipped on, finally every Slashdotter's dream of "baby licenses" will be possible.
  • Bioinformatics links (Score:5, Informative)

    by mattr ( 78516 ) <<mattr> <at> <telebody.com>> on Tuesday October 05, 2004 @11:30AM (#10440800) Homepage Journal
    Yesterday wrapped up over a week of intense bioinformatics seminars, poster sessions, exhibitions, and brain-busting studying at Bio Japan in Tokyo and related events. I just saw a presentation on the H-Invitational [www.hinv.jp] database [aist.go.jp], which, though based in Japan, also combines the content of foreign databases. It is extremely impressive, and they integrate lots of online calculators and results visualizers.

    Also, figuring out biology seems to be a lot harder than figuring out networking; at least there are all kinds of nefarious things but also serendipitous things found. In one presentation I just heard, a U.S. scientist announced that they had discovered an entire signalling network in human cells like one found in yeast cells. And apparently more proteins can be encoded than the number of genes, because of alternate orderings (counting from different displacements in the gene, I think; ask a real bioinformatics expert). One talk I heard a year ago that stuck with me was by a scientist who had devised a way to find signalling pathways in cells quickly; by forcing the cell to die if certain requirements were not met, he created a parallel computer that allowed him to discover a whole swath at once. There is also a lot of math and statistics, as well as a lot of biological knowledge, behind it; it is not strange to see various statistical tests, references to different computer programs they used for analysis, or a mention of simulated annealing (well, maybe that one not so often; it came up yesterday though).

    One interesting thing is that they (the H-Invitational people / Japan Bioinformatics Consortium) have, I believe, twice held what they call annotation jamborees, much like a hackfest! In 2002 they had 120 scientists gather (mostly from Japan but from all over the world) in a big room with a computer per person. They locked them in for 10 days and annotated, IIRC, over 20,000 genes, basically doing what I figure is some man-years of work in a week, inputting data so it can be searched, analyzed, and cross-referenced.

    They do have a comparison between the mouse and human genomes there. I wonder if something similar could be done in open source in terms of annotating and indexing a library of open source code in different languages; really, putting it all in one pseudo-language would perhaps be more useful. Anyway, biologists are learning from computer scientists learning from mathematicians, and someone famous has said that in the future, all science will be computer science.

    Bioinformatics people are doing text mining and data mining, but there are also many flavors and types of analysis programs designed to penetrate and match up information as encoded by tiny molecules, folded proteins, genes, and so on. Here are some links to get started. Also note the Perl for bioinformatics books, and there was a big O'Reilly bioinformatics conference archived from 2003 and other links too (see the bio.oreilly.org link below).

    I cannot speak for everyone, but I can convey what I have heard: there have long been communication gaps, actually cultural differences, that have held back some of this. For example, physicists like pure math and biologists deal in dirty, wet things... when people successfully combine different perspectives in this area, [more] discoveries start getting made. In Japan at least they are trying to figure out how to grow more bioinformaticists, since students tend to go only towards either biology or towards computer science (why study twice as hard?). But there seems to be a lot of interesting stuff in there for both sides.

    PLoS Bio article [plosbiology.org]
    some clusty [clusty.com]
    faq [bioinformatics.org]

    • by Anonymous Coward on Tuesday October 05, 2004 @12:34PM (#10441757)
      And apparently more proteins can be encoded than the number of genes, because of alternate orderings (counting from different displacements in the gene, I think, ask a real bioinformatics expert).
      Actually, the increase in the number of encoded proteins compared to the number of genes as you move up the "eukaryotic evolutionary chain" is due to the organisms finding new and novel ways to combine the same proteins, not to different displacements of the same gene. See the Nature paper on the draft human genome analysis: Nature. 2001 Feb 15;409(6822):860-921. Also the draft mouse genome analysis: Nature. 2002 Dec 5;420(6915):520-62.
      • The two are not mutually exclusive -- alternative splicing (combining different pieces of the same gene to make different proteins) is well established as a means of getting multiple outputs from the "same" input (e.g. a string of DNA). See for example this website on alternative splicing: http://www.exonhit.com/alternativesplicing/ [exonhit.com] (didn't expect THAT when I typed it into Google...).

        Note that the cassette model of alternative splicing is not mutually exclusive with the 'diff
  • by jaxon6 ( 104115 ) on Tuesday October 05, 2004 @01:10PM (#10442290)
    I work right in the middle of all that is biology at MIT (Center for Cancer Research, Biology, BioInformatics, Chemistry, Biological Engineering, Brain and Cog, Mathematics, Physics, Computer Science, etc.), and the geeks in each department are aware of the advancements made in other departments and how they can help themselves. In fact, MIT created something called CSBi, the Computational and Systems Biology Initiative (csbi.mit.edu), which has professors and students from all the departments listed above, and more. They collaborate, share students and projects, and organize retreats and conferences. There's even a degree program in systems biology.

    The majority of study is computer research applied towards biological methods and models, but I'm sure some of the cs geeks will be reading this article and grab the work done by the bio geeks.

    And in the end, we will all have the best mouse trap ever.
  • For those "evolving" protocols...
    http://www.ietf.org/rfc.html
  • I think this is pretty damn cool, but not any more interesting than some of the other crossover techniques that have come out recently. One idea was to mimic the way ants find food and communicate to the colony where it is. Simulated ants with simulated pheromones were used to find a decent solution to the traveling salesman problem, where the salesman wants to hit each of a list of cities in the shortest possible route, without backtracking.

    There's a pdf here [unicaen.fr] on the subject or you could read the go
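
For the curious, a compact sketch of the ant-colony idea described above: simulated ants build tours, and shorter tours deposit more pheromone, biasing later ants toward good edges. The city coordinates and parameters are invented, and this is a bare-bones version rather than any particular published variant.

    # Minimal ant colony optimization for a toy traveling salesman instance.
    import math
    import random

    CITIES = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3), (3, 7), (7, 0), (2, 2)]
    N = len(CITIES)
    ALPHA, BETA, RHO, Q = 1.0, 3.0, 0.5, 100.0  # pheromone weight, heuristic weight, evaporation, deposit

    def dist(i, j):
        (x1, y1), (x2, y2) = CITIES[i], CITIES[j]
        return math.hypot(x1 - x2, y1 - y2)

    def tour_length(tour):
        return sum(dist(tour[k], tour[(k + 1) % N]) for k in range(N))

    def build_tour(pheromone, rng):
        # One ant builds a tour: pick the next city with probability
        # proportional to pheromone^ALPHA * (1/distance)^BETA.
        start = rng.randrange(N)
        tour, unvisited = [start], set(range(N)) - {start}
        while unvisited:
            here = tour[-1]
            candidates = list(unvisited)
            weights = [pheromone[here][c] ** ALPHA * (1.0 / dist(here, c)) ** BETA
                       for c in candidates]
            nxt = rng.choices(candidates, weights=weights)[0]
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    def ant_colony(iterations=100, ants=20, seed=1):
        rng = random.Random(seed)
        pheromone = [[1.0] * N for _ in range(N)]
        best_tour, best_len = None, float("inf")
        for _ in range(iterations):
            tours = [build_tour(pheromone, rng) for _ in range(ants)]
            # Evaporate old pheromone, then let each ant deposit on its tour's
            # edges, shorter tours depositing more.
            for i in range(N):
                for j in range(N):
                    pheromone[i][j] *= (1.0 - RHO)
            for tour in tours:
                length = tour_length(tour)
                if length < best_len:
                    best_tour, best_len = tour, length
                for k in range(N):
                    a, b = tour[k], tour[(k + 1) % N]
                    pheromone[a][b] += Q / length
                    pheromone[b][a] += Q / length
        return best_tour, best_len

    tour, length = ant_colony()
    print("best tour:", tour, "length: %.2f" % length)
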

  • So... I did this with intrusion detection (masquerade detection, actually) about a year and a half ago. Just FYI...

    http://www.acsac.org/2003/beststud.html [acsac.org]
  • So he can use a binary file to create a tree of related bits; but suppose he has access to the compiler: how does he get from this tree to a description of which source code leads to which binary code?

    I guess he should write a script to create a huge number of very similar programs and compile them all to create such binary trees. Are there standard methods for analyzing such a data set? Is it just simple multivariate statistics?
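
One hedged way to answer that question: treat each compiled binary as a byte string, compute a pairwise similarity matrix, and hand the resulting distances to any standard clustering or tree-building method (the same step the genome comparisons end with). The "binaries" below are synthetic stand-ins, not real compiler output.

    # Pairwise similarity between stand-in "binaries" using a generic sequence matcher.
    from difflib import SequenceMatcher
    from itertools import combinations

    binaries = {   # pretend these came from compiling near-identical programs
        "prog_a": bytes(range(0, 64)) + b"\x90" * 16,
        "prog_b": bytes(range(0, 64)) + b"\xcc" * 16,
        "prog_c": bytes(range(128, 192)) + b"\x90" * 16,
    }

    def similarity(a: bytes, b: bytes) -> float:
        return SequenceMatcher(None, a, b).ratio()

    for (name1, blob1), (name2, blob2) in combinations(binaries.items(), 2):
        print(f"{name1} vs {name2}: {similarity(blob1, blob2):.2f}")

    # The (1 - similarity) values form a distance matrix that can feed
    # hierarchical clustering or neighbor-joining, the tree-building step
    # biologists use on genomes.
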

  • Sounds exciting: applying one science to another. I think this is the basic foundation on which science builds itself up, isn't it?
  • Is our genome protected under the DMCA, or is that just around the corner? Hope I didn't give them any ideas...

"I'm a mean green mother from outer space" -- Audrey II, The Little Shop of Horrors

Working...