Wireless Networking / Hardware / Technology

Nonlinear Neural Nets Smooth Wi-Fi Packets (204 comments)

mindless4210 writes "Smart Packets, Inc. has developed the Smart WiFi Algorithm, a packet-sizing technology that can predict the near future of network conditions based on the recent past. The development was originally started to enable smooth real-time data delivery for applications such as streaming video, but when tested on 802.11b networks it was shown to increase data throughput by 100%. The technology can be applied at the application level, the operating system level, or the firmware level."
This discussion has been archived. No new comments can be posted.

  • by miguel_at_menino.com ( 89271 ) on Tuesday May 04, 2004 @08:20PM (#9059092)
    But does this improved network performance allow me to predict if I will get a first post based on my past inability to do so?
  • by Anonymous Coward on Tuesday May 04, 2004 @08:22PM (#9059106)
    Is anyone else slightly alarmed by this news? "Neural-net" technology that shows some degree of intelligence (if you consider making fuzzy predictions intelligence). I think that checks or governing circuits should be put in place for this kind of technology so that it doesn't get out of hand by, oh I don't know, burning out transmitter circuits or something. Remember the documentary "The Terminator"? Yeah. I do. I don't want something like that to happen.
  • by fjordboy ( 169716 ) on Tuesday May 04, 2004 @08:23PM (#9059119) Homepage
    When I see the headline: "Nonlinear Neural Nets Smooth Wi-Fi Packets" and I only understand the words nets, smooth and packets...and none of them in relation to each other... - I have to be a little concerned that my geekiness is dwindling...
    • by Hatta ( 162192 ) on Tuesday May 04, 2004 @08:32PM (#9059184) Journal
      I know those words, but that sign doesn't make any sense.
    • by pla ( 258480 ) on Tuesday May 04, 2004 @09:21PM (#9059533) Journal
      When I see the headline: "Nonlinear Neural Nets Smooth Wi-Fi Packets" and I only understand the words nets, smooth and packets...and none of them in relation to each other

      Simple 'nuff, really...

      Neural net - An arrangement of "dumb" processing nodes in a style mimicking that which the greybeards of AI (such as Minsky and Turing et al) once believed real biological neurons used. Basically, each node has a set of inputs and outputs. It sums all its inputs (each with a custom weight, the part of the algorithm you actually train), performs some very simple operation (such as hyperbolic tangent) called the "transfer function" on that sum, then sets all of its outputs to that value (which other neurons in turn use as their inputs). (There's a short code sketch of this at the end of this comment.)

      Nonlinear - This refers to the shape of the transfer function. A linear neural net can, at best, perform linear regression. You don't need a neural net to do that well (in fact, you can do it a LOT faster with just a single matrix inversion). So calling it "nonlinear" practically counts as redundant in any modern context.

      Smooth - A common signal processing task involves taking a noisy signal, and cleaning it up.

      Wi-Fi - An example of a fairly noisy signal that would benefit greatly from better prediction of the signal dynamics, and from better ability to clean the signal (those actually go together, believe it or not - In order to "clean" the signal without degrading it, you need to know roughly what it "should" look like).

      Packets - The unit in which "crisps" come. Without these, you can't use a Pringles can to boost the gain on your antenna to near-illegal values. ;-)

      There, all make sense now?
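
      And for anyone who'd rather see that in code than in prose, here's a minimal Python sketch of the kind of node described above (all the weights are invented; in a real net they're the part training adjusts):

      ```python
      import math

      def neuron(inputs, weights):
          # Weighted sum of the inputs, squashed by the "transfer function" (tanh here).
          total = sum(x * w for x, w in zip(inputs, weights))
          return math.tanh(total)

      # Toy two-layer pass: three inputs -> two hidden nodes -> one output.
      inputs = [0.2, -0.7, 0.5]
      hidden = [neuron(inputs, [0.1, 0.4, -0.3]),
                neuron(inputs, [0.8, -0.2, 0.6])]
      output = neuron(hidden, [0.5, -0.9])
      print(output)
      ```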
      • by Anonymous Coward
        I believe packet smoothing refers to taming rapid swings in packet output rates, so the network can adapt in a timely manner and thus drop fewer packets. In the OSI protocol layer model, I guess it's usually best accomplished in the protocol timers of the network and link layers. It would have nothing to do with receiver signal filtering, which is a physical layer process, performed before converting the received signal into packet data bits.
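
        As a crude illustration of what "taming rapid swings" might look like (my own sketch, nothing from the article), an exponentially weighted moving average of the measured packet rate follows the trend without chasing every burst:

        ```python
        def smooth_rates(measured, alpha=0.2):
            # Exponentially weighted moving average; alpha is an arbitrary
            # smoothing factor (smaller = slower to react to bursts).
            avg = measured[0]
            smoothed = []
            for rate in measured:
                avg = alpha * rate + (1 - alpha) * avg
                smoothed.append(round(avg, 1))
            return smoothed

        # A bursty sequence of packets-per-interval measurements.
        print(smooth_rates([10, 80, 12, 75, 11, 78]))
        ```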
        • I believe packet smoothing refers to taming rapid swings in packet output rates

          Indeed it does... You couldn't really do much with the raw received signal at a higher level than the device's firmware, and you'd probably want it even lower than that (such as in the actual analog hardware).

          But that didn't fit as nicely into a pun involving Pringles.
  • by Anonymous Coward
    This technology can increase throughput 200-800% in networks of 3 Asian people and 1 doctor.
  • Hahaha (Score:3, Interesting)

    by Anonymous Coward on Tuesday May 04, 2004 @08:24PM (#9059128)
    This sounds like a scam. The CEO is a furniture salesman, and the CTO was a consultant to DEC with an EE from the University of Southern Maine.
    • Re:Hahaha (Score:3, Insightful)

      by JessLeah ( 625838 ) *
      It seems that CEOs always end up in fields they have no experience in. Remember Sculley of Apple shame? He was a Pepsi exec.
    • Re:Hahaha (Score:3, Insightful)

      by Jeffrey Baker ( 6191 )
      There are lots of scam giveaways in this article. If the protocol "can be implemented" at the application layer, the network layer, or the MAC firmware, that means it *hasn't* been implemented in any of those places at all.
  • Damn... (Score:5, Funny)

    by AvoidTheNoid ( 772246 ) on Tuesday May 04, 2004 @08:26PM (#9059141)
    Words I understood in the headline:

    1.Smooth

    Fuck...
  • by women ( 768472 ) on Tuesday May 04, 2004 @08:31PM (#9059176)
    I'm curious as to why they are using Neural Networks for this application? In the last 10 years or so, most machine learning applications have moved away from Neural Networks to more mathematically based models such as Support Vector Machines, a generative model (e.g. Naive Bayes), or some kind of Ensemble Method (e.g. Boosting). I suspect they used NN because the Matlab toolkit made it easy or someone in research hasn't kept up. I'd look for a paper to come out soon that improves the accuracy by using SVM.
    • by Anonymous Coward on Tuesday May 04, 2004 @08:43PM (#9059262)
      I'm curious as to why they are using Neural Networks for this application?

      Well, it's quite obviously because a Support Vector Machine is inherently linear, and to make it nonlinear, you must insert a nonlinear kernel which you need to select by hand.

      If you'd read the article, you'd see that they are using a recurrent-feedback neural network; good luck finding a recurrent-feedback nonlinear kernel for an SVM....! You can't just plug in a radial basis function and expect it to work. In this application, they are looking for fast, elastic response to rapidly changing conditions as well as a low tolerance for predictive errors--something an RFNN is ideal for, and that an SVM is absolutely terrible at.

      • by semafour ( 774396 ) on Tuesday May 04, 2004 @09:09PM (#9059434)

        Well, it's quite obviously because a Support Vector Machine is inherently linear, and to make it nonlinear, you must insert a nonlinear kernel which you need to select by hand.

        Not true [warf.org].

        "This invention provides a selection technique, which makes use of a fast Newton method, to produce a reduced set of input features for linear SVM classifiers or a reduced set of kernel functions for non-linear SVM classifiers."

      • by Anonymous Coward on Tuesday May 04, 2004 @09:14PM (#9059482)
        Actually, the whole point of SVMs is that they can be used to model a non-linear decision boundary. Contrary to the above post, selecting the non-linear kernel isn't a big deal, because the three common ones (polynomial, radial basis function, and sigmoid) generally produce similar classification results in most applications. Also, SVMs are pretty damn fast to train and update, since only the support vectors need to be remembered and changed. Just check the literature...
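
        For reference, the three common kernels are just simple functions of a pair of input vectors. A rough Python sketch (the parameter values are arbitrary, and note the caveat about the sigmoid in a reply below):

        ```python
        import numpy as np

        def polynomial_kernel(x, y, degree=3, c=1.0):
            return (np.dot(x, y) + c) ** degree

        def rbf_kernel(x, y, gamma=0.5):
            return np.exp(-gamma * np.sum((x - y) ** 2))

        def sigmoid_kernel(x, y, kappa=0.1, c=-1.0):
            # Offered by common SVM packages, but not positive definite
            # for all parameter choices (see the reply below).
            return np.tanh(kappa * np.dot(x, y) + c)

        x, y = np.array([1.0, 2.0]), np.array([0.5, -1.0])
        print(polynomial_kernel(x, y), rbf_kernel(x, y), sigmoid_kernel(x, y))
        ```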

        I figure the real reasons they use NNs are much simpler. Firstly, it's really easy to implement NNs that predict numeric values instead of classes and, even more importantly, they work. Research usually involves trying everything under the sun and reporting/patenting/exploiting whatever worked best.

        • FYI, sigmoid is not a kernel. It is not positive definite.
        • not really (Score:3, Interesting)

          by Trepidity ( 597 )
          The whole point of SVMs is that they can be used to model a linear decision boundary. They were developed to find a maximum-margin hyperplane separating positive and negative training instances, and the kernel methods to allow them to work on non-linear boundaries were a later addition.
          • Indeed, the SVM can be used to model a linear decision boundary (or, alternatively, do regression) in any feature space. The kernel has to comply with Mercer's theorem for most kernel machines, but not for all (e.g. not for the relevance vector machine).

            Later addition? Nonlinear kernels were already used even before the SVM was called the SVM. See here [psu.edu]. Perhaps you're thinking of the tutorials, which make it look like a later addition.
      • RFNN? Read the Fine Neural Network?
    • by zopu ( 558866 )

      I hear all too often from people in the field of machine learning who pick their favourite solution (SVMs and NNs are the most common) and then go hunting for a problem.

      It might not be exactly the best technique, but if at the time it was the easiest to understand and use, and gave really good results, then the right decision was made.

      Is that the difference between theory and practice right there?

    • by jarran ( 91204 )
      I expect at least part of the answer is that neural networks are trivial to understand and implement compared to support vector machines.

      You might be able to build SVM implementations relatively easily on a real computer using off-the-shelf libraries etc., but I doubt many of these would run on a WiFi card.

      Neural nets have also been around for quite a while, so they have gained acceptance. Although SVMs have been known to the machine learning community for quite a while now, they have only just started being
    • pardon, but what is so unmathematical about neural networks? they are trying to predict a non-linear function, and ANN + backpropagation has been used for this kind of stuff for ages. plus, there are plenty of applications where ANNs are still used quite heavily.
    • Frankly, I'm surprised they're even using a non-linear filter. I bet you could get significant performance using simple LMS. I mean, you'd think that WiFi noise would be pretty white, no? Ah, according to the article they are using a Recurrent Neural Network, which is related to Echo State Networks, so perhaps they have a high-dimensional input space, which would lead a linear filter to overfitting problems.

      As for not using SVMs, I seem to recall that they are better suited for classification than for para

    • I'm curious as to why they are using Neural Networks for this application? In the last 10 years or so, most machine learning applications have moved away from Neural Networks to more mathematically based models such as Support Vector Machines, a generative model (e.g. Naive Bayes), or some kind of Ensemble Method (e.g. Boosting).

      I challenge you to give a mathematical justification of why you think that support vector machines would be better in this application than neural networks. While SVM papers fill
  • stock market. ROI all the way.
  • Skeptic (Score:5, Insightful)

    by giampy ( 592646 ) on Tuesday May 04, 2004 @08:49PM (#9059298) Homepage
    Very often the term "neural network" is used
    just as a selling point because it sounds
    like something extremely advanced and "related
    to artificial intelligence".

    usually the neural network is just a
    very simple, possibly linear, adaptive filter
    which means that really contains no more
    than a few matrix multiplications ...

    yes it has some success in approximating
    things locally, but terms like "learning"
    are really misused

    After RTFA (the second) it actually
    seems that they did try two or three
    things before, but really i wouldn't
    "welcome our new intelligent packet sizers overlords"
    just yet.
    • Re:Skeptic (Score:5, Funny)

      by batkiwi ( 137781 ) on Tuesday May 04, 2004 @09:15PM (#9059486)
      Are you posting
      that from a mobile
      phone or do
      you just like to
      hit enter after
      every couple of
      words as some
      sort of nervous ti
      ck?
    • Re:Skeptic (Score:4, Insightful)

      by hawkstone ( 233083 ) on Tuesday May 04, 2004 @10:32PM (#9060027)
      usually the neural network is just a very simple, possibly linear, adaptive filter which means that really contains no more than a few matrix multiplications ...

      The simplicity of the calculation does not mean it is not a learning algorithm. Real neural networks are quite simple, as each "neuron" is simply a weighted average of the inputs passed through a sigmoid or step function. However, en masse they perform better than most other algorithms at handwriting recognition. They take a training set and operate on it repeatedly, updating their parameters, until some sort of convergence is reached. Their performance on a test set is a measure of how well they have learned. This is a learning algorithm.

      Even linear regression is a learning algorithm. You give it a bunch of training data as input (i.e. x,y pairs), it iterates on that data until it converges, and the fitted model is then used to predict new data. There happens to be an analytic solution to the iteration, but this does not make it any less of a learning algorithm.
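
      A toy sketch of that point (nothing to do with TFA): the same straight-line fit reached once by iterating gradient descent and once by the closed-form least-squares solution.

      ```python
      import numpy as np

      # Toy training data: y is roughly 2x + 1 plus a little noise.
      rng = np.random.default_rng(0)
      x = np.linspace(0.0, 1.0, 50)
      y = 2.0 * x + 1.0 + 0.05 * rng.standard_normal(50)
      X = np.column_stack([x, np.ones_like(x)])  # columns: slope, intercept

      # "Learning" by iteration: gradient descent on the squared error.
      w = np.zeros(2)
      for _ in range(5000):
          grad = 2.0 * X.T @ (X @ w - y) / len(y)
          w -= 0.1 * grad

      # The analytic solution to the same problem.
      w_closed = np.linalg.lstsq(X, y, rcond=None)[0]
      print(w, w_closed)  # both close to [2, 1]
      ```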

      I think maybe your definition of "learning" is unnecessarily strict. The simplicity of the computation is not what defines this category of algorithms.

    • Re: Skeptic (Score:5, Informative)

      by Black Parrot ( 19622 ) on Wednesday May 05, 2004 @12:39AM (#9060697)


      > usually the neural network is just a very simple, possibly linear, adaptive filter which means that really contains no more than a few matrix multiplications ...

      No one in their right mind would use a linear ANN, since ANNs get their computational power from the nonlinearities introduced by their squashing functions. Without the nonlinearities, you'd just be doing linear algebra, e.g. multiplying vectors by matrices to get new vectors.
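
      A quick numerical illustration of that point (my own sketch): two stacked linear layers always collapse into a single matrix multiply, while a tanh in between does not.

      ```python
      import numpy as np

      rng = np.random.default_rng(1)
      W1 = rng.standard_normal((4, 3))
      W2 = rng.standard_normal((2, 4))
      x = rng.standard_normal(3)

      # Two linear layers are exactly one linear layer.
      print(np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x))         # True

      # Add a squashing function and the collapse no longer holds.
      print(np.allclose(W2 @ np.tanh(W1 @ x), (W2 @ W1) @ x))  # False (in general)
      ```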

      As for the computational power of ANNs,

      • A simple feed-forward network with a single hidden layer can approximate any continuous function on [0,1]^n with arbitrary accuracy. (Continuous is the right condition; Cybenko's result does not require differentiability.)
      • Certain architectures of recurrent ANNs are equivalent to Turing machines, if the weights are specified with rational numbers.
      • An ANN with real-valued weights (real, not fp) would be a super-Turing device.
      Google a paper by Cybenko for the first result, Siegelmann and Sontag for the second, and Siegelmann (sans Sontag?) for the third.

      > yes it has some success in approximating things locally, but terms like "learning" are really misused

      "Neural network" and "learning" are orthogonal concepts. A neural network is a model for computation, and learning is an algorithm.

      In practice we almost always use learning to train neural networks, since programming them for non-trivial tasks would be far too difficult.
  • by doc_brown ( 73383 ) on Tuesday May 04, 2004 @08:50PM (#9059309) Homepage
    To me, this sounds like (in its simplest form) a variant on the Tit for Tat strategy that is usually applied to the Prisoner's Dilemma.

    • For those unaware, the Prisoner's Dilemma goes something like this (taken from wikipedia.org):

      "Two suspects are arrested by the police. The police have insufficient evidence for a conviction, and having separated them, visit each of them and offer the same deal: If you confess and your accomplice remains silent, he gets the full 10-year sentence and you go free. If he confesses and you remain silent, you get the full 10-year sentence and he goes free. If you both stay silent, all we can do is give you both
      • Back in the days when computers were large hulking monsters best kept under a desk, some college had a contest matching two computer programs playing the prisoner's dilemma game with roughly equivalent outcomes. A lot of famous computer scientists submitted programs, some many pages in length, but it turned out a really simple program won: Tit for Tat. The program begins with silence, but if it is betrayed, in the next round it will betray you, then switch back to silence. That's pretty much it. TGitH
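
        For the curious, the winning strategy really is that small. A toy version (my own sketch, not the original tournament code):

        ```python
        def tit_for_tat(their_history):
            # Cooperate first, then simply copy the opponent's previous move.
            if not their_history:
                return "cooperate"
            return their_history[-1]

        # Four rounds against an always-defecting opponent.
        mine, theirs = [], []
        for _ in range(4):
            mine.append(tit_for_tat(theirs))
            theirs.append("defect")
        print(mine)  # ['cooperate', 'defect', 'defect', 'defect']
        ```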
  • Chartsengrafs (Score:5, Informative)

    by NanoWit ( 668838 ) on Tuesday May 04, 2004 @08:55PM (#9059334)
    Here's a graph that I ripped out of some lecture notes. It shows how much of a problem congestion is on 802.11b networks.

    http://web.ics.purdue.edu/~dphillip/802.11b.gif [purdue.edu]

    For a little explanation: where it says "Node 50" or "Node 100", that means there are 50 or 100 computers on the wireless network. And the throughput numbers are for the whole network, not per host. So when 100 nodes are getting 3.5 Mbps, that's 0.035 Mbps per host.

    Thanks to professor Park
  • Why wireless only (Score:5, Insightful)

    by Old Wolf ( 56093 ) on Tuesday May 04, 2004 @08:58PM (#9059351)
    Why isn't there something like this for the normal internet? Even Zmodem's "old days" approach (big packets when the transfer was going well, small packets when it wasn't) is better than the fixed MTU/MRU we're stuck with now.
    • Re:Why wireless only (Score:2, Informative)

      by Anonymous Coward
      Why isn't there something like this for the normal internet? Even Zmodem's "old days" approach (big packets when the transfer was going well, small packets when it wasn't) is better than the fixed MTU/MRU we're stuck with now.

      The normal internet has far fewer collisions and errors than wireless ethernet. And ethernet switches are now so cheap that it isn't worth your money to buy ethernet hubs.

      And the advantage here is that it is (allegedly) a successful predictive model of whether to use big or small packets, and not reactive.
      • Re:Why wireless only (Score:2, Informative)

        by mesterha ( 110796 )

        And the advantage here is that it is (allegedly) a successful predictive model of whether to use big or small packets, and not reactive.

        If you react to errors, you can resend, but you've already wasted bandwidth. If you can avoid the error in the first place, it's much better! :)

        It predicts based on past performance; therefore it is reacting. The savings from switching packet size comes from resending small packets instead of large ones. Losing a single small packet is not nearly as bad as

    • I would rather see TCP get updated with that college's improvements talked about a few weeks back... It made the window size scale up much more quickly to take advantage of broadband, rather than slowly, so as to saturate (or use to full potential) cable broadband more efficiently.
    • Because normal Internet doesn't need it.

      Ethernet can use CSMA/CD to deal with this kind of thing and hit around 97% throughput on wires. Wifi can't use the CD part (collision detect), because transmitting and receiving at the same time requires very expensive equipment, so it can't tell if a collision has occurred while it's transmitting.

      The rest of the Internet is made up of either point-to-point links which won't have collisions as there's only two stations or other wired connections that can use CSMA/CD or simi
  • by Anonymous Coward on Tuesday May 04, 2004 @08:58PM (#9059353)
    It's a new way of determining the optimum packet size on the fly so that collisions, errors & retransmissions are minimized, greatly boosting overall throughput.

    QED
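
    Whatever the details of their predictor, the underlying trade-off is easy to sketch (the header size and error rates below are invented, just to show the shape of the curve): small packets waste a larger fraction of air time on headers, large packets are more likely to be hit by a bit error and retransmitted, and the sweet spot moves as the error rate changes.

    ```python
    def expected_goodput(payload, header=34, bit_error_rate=1e-4):
        # Expected useful bytes delivered per transmission attempt.
        # header=34 is only an illustrative 802.11-ish overhead figure.
        bits = 8 * (payload + header)
        return payload * (1.0 - bit_error_rate) ** bits

    for ber in (1e-5, 1e-4, 1e-3):
        best = max(range(50, 1501, 50),
                   key=lambda size: expected_goodput(size, bit_error_rate=ber))
        print(f"BER {ber:g}: best payload around {best} bytes")
    ```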
  • EE Times Article (Score:4, Insightful)

    by FreeHeel ( 620639 ) on Tuesday May 04, 2004 @09:12PM (#9059458)
    Wow...not 30 minutes ago I read this article [eetimes.com] in this week's EE Times on the same topic.

    This sounds like a great improvement to 802.11x technology...now let's open-source it so we can all benefit!

  • by identity0 ( 77976 ) on Tuesday May 04, 2004 @09:21PM (#9059527) Journal
    Gee, let's see how many buzzwords we can cram into a technology:

    "Introducing iFluff/XP: An XML-based Object-oriented neural networking system that will synergize the modular components of your SO/HO WAN protocols, while minimizing TCO and giving five 9's reliability by branch-predicting streaming traffic through your SAN, NAS, or ASS.

    iFluff/XP allows you to commoditize and monetize the super-size networkcide as rogue packets from black hats and white hats and clue bats compete for cyber-mindshare of your Red Hat hosts.

    Secure your Homeland LAN and manage your digital rights with dignitude and affordability with the help of iFluff/XP's bytecode-based embedded operating system protocols interfacing through broadband Wi-Fi connectivity and virtual presence frameworks.

    A user-friendly GUI is provided through an XSLT module interfacing to leading industry applications such as Mozilla, .NET, Java 2 USS Enterprise Edition, and GNU/Emacs - soon to include POP, IMAP, P2P and B2B functionality for enhanced productivity.

    When you're thinking of buzzword-compliant, ISO9001 conformant, remotely-managed turnkey security solutions, remember iFluff.... TO THE XXXTREME!"

    Oh god, my brain hurts now.
  • Does this need to be on both ends of the connection or on just one end?
    • Re:which end? (Score:3, Interesting)

      by Kris_J ( 10111 ) *
      Interesting question. I would assume that any given existing 802.11b adapter can receive packets of any size, given that the packet protocol lets the receiver know how big each packet is and when it has finished. Thus you could just deploy a new access point and get a boost from it to the computers. Similarly, you could install a new NIC in a particular PC and boost the transfer rate from it to the access point. For benefits in both directions you'll have to upgrade both ends.
      • Thus you could just deploy a new access point and get a boost from it to the computers. Similarly, you could install a new NIC in a particular PC and boost the transfer rate from it to the access point. For benefits in both directions you'll have to upgrade both ends.

        According to the article, you can reap the benefits through a simple firmware upgrade (or even through an application). Since I don't know how the 802.11b standard works, I can't comment on whether you would need to upgrade firmware on both

        • Indeed. I hadn't got to the firmware/software option by the time I made my original reply. Now I just need Netgear to issue firmware upgrades for all the 802.11b stuff I have of theirs.

          Standing back a bit, this is pretty shiny. A "simple" firmware update has the potential to double the throughput of existing equipment, so long as there's a bit of spare processing power available. I love anything that can use spare CPU time to double storage or throughput, especially in embedded systems.

      • Sounds reasonable. I wonder if they could work this into the Linux kernel. I don't know if it's highly specific code or if it's something anyone could code now that they have a clue to try. Any reason this wouldn't benefit all the networking code, not just wireless? Such that if it's implemented in firmware then it can be left to the hardware, else the kernel kicks in support itself.
        • Any reason this wouldn't benefit all the networking code, not just wireless?
          This new technique is a way of dealing with errors as efficiently as possible. A wireless NIC on the edge of the access point's range sees a lot of noise, while few cabled connections get anywhere near the run length limit of a Cat5 cable. I would assume it would also be of limited benefit to a strong wireless connection with few if any errors and no congestion.
    • by wed128 ( 722152 )
      If it is packet-sizing, I suppose it would only have to be on the transmitting end... so you'd basically double your upstream... and if the AP or router has it implemented for ITS upstream, then you get your full speed boost.
  • Patents on the technology can be filed at the US Patent Office, the WTO, or at skynet
  • I don't see why they need a complex neural net. They could streamline 90% of WiFi traffic by assuming that the browsing will consist of downloading .jpgs and small .mpgs off TGP sites.

  • LOL (Score:5, Funny)

    by pyth ( 87680 ) on Tuesday May 04, 2004 @09:35PM (#9059603)
    Saying "Non-linear neural network" is like saying "Non-purple hamster". I mean, how often do you see a linear NN? Like, never.
  • For sure?

    I worked in the signal processing / neural net area a while ago, and it wasn't ready for prime time, then.

    Does anyone know for sure if there are commercially viable AI's making money on stock market technical analysis (TA) yet?

  • Looks sketchy to me (Score:2, Interesting)

    by marksthrak ( 61191 )
    Now, I'm an AI guy, not a networking expert, but some of this seems sketchy. Their website says:

    Because all data packets that are now being sent across packet-switched networks are in fixed-size data packets, SmartPackets' "variable-sizing" packet technology can positively impact the performance of a very wide range of technologies, applications and protocols.

    I'm pretty sure that's not the case. Besides, if the technology you're pushing boils down to 'variable-sizing', seems like someone's thought of

    • There are probably 3 main types of traffic

      - general slow stuff like telnet
      - general high utility stuff like ftp or http file transfer
      - bursty traffic like web surfing or email checking

      It seems highly unlikely that you'd need a neural net to optimize the packet size for these different types.

      I also don't really understand why that makes it go so much faster. I'm sure you can conduct 'corner-case' tests where it makes a difference - but on the whole I can get file transfers to run at pretty near the lines
      • No, that's not what this is about.

        The problem to solve is that, depending on the amount of traffic and noise, the optimal packet length to send varies. If you send a long packet and it fails, you have to resend all of its data, even the portion that arrived intact.

        If instead you send two packets, then only the packet with the error would have to be resent. However, smaller packets mean larger overhead from the protocols used. This overhead-to-packet-length ratio is optimised depending on the level of noise
    • by Hast ( 24833 )
      I'd recommend that you read the second article, from EE Times. It actually has some content, which is something their own site is almost completely devoid of.

      And people have been doing this before; the EE Times article mentions that. Apparently no one has either made as much progress or made as much of a fuss over it before. A quick search for "variable packet length and wireless" turns up quite a lot of results, though. I'm fairly confident that you can find previous research in this area if you l
