MojoKid writes "Intel announced a set of new enterprise products today aimed at furthering its strengths in the TOP500 supercomputing market. As of today, China's Tianhe-2 supercomputer (aka Milky Way 2) is the fastest supercomputer on the planet at roughly 54 PFLOPS. Intel is putting a major push behind heterogeneous computing with the Tianhe-2. Each node contains two Ivy Bridge sockets and three Xeon Phi cards; each node, therefore, contains 422.4 GFLOPS of Ivy Bridge performance, but 3.43 TFLOPS worth of Xeon Phi. In addition, we'll see new Xeons based on this technology later this year in the 22nm E5-2600 V2 family, built on Ivy Bridge technology and offering up to 12 cores / 24 threads. The new Xeons, however, aren't really the interesting part of the story. Today, Intel is adding cards to the current Xeon Phi lineup: the 7120P, 3120P, 3120A, and 5120D. The 3120P and 3120A are the same card; the 'P' is passively cooled, while the 'A' integrates a fan. Both of these solutions have 57 cores and 6GB of RAM. Intel states that they offer ~1 TFLOPS of performance, which puts them on par with the 5110P that launched last year, but with slightly less memory and presumably a lower price point. At the top of the line, Intel is introducing the 7120P and 7120X; the 7120P comes with an integrated heat spreader, while the 7120X doesn't. Clock speeds are higher on these cards, which also have 61 cores instead of 60, 16GB of GDDR5, and 352GB/s of memory bandwidth. Customers who need lots of cores and not much RAM can opt for one of the cheaper 3100 cards, while the 7100 family accommodates much larger data sets."
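The per-node figures above can be sanity-checked with some back-of-the-envelope arithmetic. This sketch assumes the Xeon E5-2692 v2 parts used in Tianhe-2 (12 cores at 2.2 GHz with 8 double-precision FLOPs per cycle via AVX); the SKU and FLOPs-per-cycle figure are assumptions, not stated in the summary.

```python
# Back-of-the-envelope check of the quoted per-node peak performance.
# Assumption: Xeon E5-2692 v2, 12 cores, 2.2 GHz, 8 DP FLOPs/cycle (AVX).
cores, clock_ghz, flops_per_cycle = 12, 2.2, 8
ivy_bridge_socket = cores * clock_ghz * flops_per_cycle  # GFLOPS per socket
node_cpu = 2 * ivy_bridge_socket                         # two sockets per node
print(node_cpu)                                          # 422.4, matching the quoted figure

node_phi = 3430.0                                        # three Xeon Phi cards, in GFLOPS
phi_share = node_phi / (node_phi + node_cpu)
print(round(phi_share, 2))                               # ~0.89: the Phis provide ~89% of node peak
```

The takeaway is the same as the summary's: the coprocessors dominate, contributing roughly nine-tenths of each node's peak throughput.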
An anonymous reader writes "China's Tianhe-2 is the world's fastest supercomputer, according to the latest semiannual Top 500 list of the 500 most powerful computer systems in the world. Developed by China's National University of Defense Technology, the system appeared two years ahead of schedule and will be deployed at the National Supercomputer Center in Guangzhou, China, before the end of the year."
aarondubrow writes "Researchers recently created OpenfMRI, a web-based, supercomputer-powered tool that makes it easier for researchers to process, share, compare and rapidly analyze fMRI brain scans from many different studies. Applying supercomputing to the fMRI analysis allows researchers to conduct larger studies, test more hypotheses, and accommodate the growing spatial and time resolution of brain scans. The ultimate goal is to collect enough brain data to develop a bottom-up understanding of brain function."
aarondubrow writes "For more than 50 years, linguists and computer scientists have tried to get computers to understand human language by programming semantics as software, with mixed results. Enabled by supercomputers at the Texas Advanced Computing Center, University of Texas researchers are using new methods to more accurately represent language so computers can interpret it. Recently, they were awarded a grant from DARPA to combine distributional representation of word meanings with Markov logic networks to better capture the human understanding of language."
An anonymous reader writes "With help from a draft report (PDF) from Oak Ridge National Laboratory's Jack Dongarra, who also spearheads the process of verifying the top supercomputers on the list, we get a detailed look at China's Tianhe-2 system. As noted previously, the system will be housed at the National Supercomputer Center in Guangzhou and is aimed at providing an open platform for research and education and a high performance computing service for southern China. From Jack's details: '... was sent results showing a run of the HPL benchmark using 14,336 nodes; that run was made using 50 GB of the memory of each node and achieved 30.65 petaflops out of a theoretical peak of 49.19 petaflops, or an efficiency of 62.3% of theoretical peak performance, taking a little over 5 hours to complete. The fastest result shown was using 90% of the machine. They are expecting to make improvements and increase the number of nodes used in the test.'"
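The numbers in Dongarra's report are internally consistent, which a quick check confirms. Everything below uses only the figures quoted above (node count, Rmax, Rpeak); no outside data is assumed.

```python
# Sanity-check of the HPL run quoted from Dongarra's draft report.
nodes = 14336
rmax_pf, rpeak_pf = 30.65, 49.19   # sustained and theoretical peak, in petaflops

efficiency = rmax_pf / rpeak_pf
print(round(100 * efficiency, 1))  # 62.3, the efficiency figure quoted in the report

per_node_tf = 1000 * rpeak_pf / nodes
print(round(per_node_tf, 2))       # ~3.43 TFLOPS of theoretical peak per node
```

The ~3.43 TFLOPS of peak per node also lines up with the per-node figures discussed in the Tianhe-2 item above.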
Nerval's Lobster writes "The Texas Advanced Computing Center (TACC) at The University of Texas at Austin is going to get a major speed boost this summer, and it won't come from new CPUs. Internet2, the research project that acts as a test bed for new Internet technologies, will take TACC's massive computing system from 10Gbps to 100Gbps of Ethernet throughput. TACC supercomputers are regularly found near the top of the Top 500 supercomputer list, which ranks the world's fastest supercomputers. But while the supercomputers were fast, the connectivity wasn't quite up to snuff, so TACC began the migration to the Internet2 network. TACC is a key partner in the UT Research Cyberinfrastructure, which provides a combination of advanced computing, high-bandwidth network connectivity, and large data storage to all 15 of the UT system schools. So not only is TACC upgraded to Internet2's 100Gbps, 8.8 terabit-per-second optical network, platform, services and technologies; so is the entire UT system. 'This Internet2 bandwidth upgrade will enable researchers to achieve a tenfold increase in moving data to/from TACC's supercomputing, visualization and data storage systems, greatly increasing their productivity and their ability to make new discoveries,' TACC director Jay Boisseau wrote in a statement."
An anonymous reader writes "Commodity ARM CPUs are poised to replace x86 CPUs in modern supercomputers just as commodity x86 CPUs replaced vector CPUs in early supercomputers. An analysis by the EU Mont-Blanc Project (PDF) (using Nvidia Tegra 2/3, Samsung Exynos 5 and Intel Core i7 CPUs) highlights the suitability and energy efficiency of ARM-based solutions. They finish off by saying, 'Current limitations [are] due to target market condition — not real technological challenges. ... A whole set of ARM server chips is coming — solving most of the limitations identified.'"
gbrumfiel writes "Last week, Google and NASA announced a partnership to buy a new quantum computer from Canadian firm D-Wave Systems. But NPR news reports that many scientists are still questioning whether the new machine really is quantum. Long-time critic and computer scientist Scott Aaronson has a long post detailing the current state of affairs. At issue is whether the 512 quantum bits at the processor's core are 'entangled' together. Measuring that entanglement directly destroys it, so D-Wave has had a hard time convincing skeptics. As with all things quantum mechanical, the devil is in the details. Still, it may not matter: D-Wave's machine appears to be far faster at solving certain kinds of problems (PDF), regardless of how it works."
riverat1 writes "After being embarrassed when the Europeans did a better job forecasting Sandy than the National Weather Service, Congress allocated $25 million ($23.7 million after sequestration) in the Sandy relief bill for upgrades to forecasting and supercomputer resources. The NWS announced that its main forecasting computer will be upgraded from the current 213 teraflops to 2,600 teraflops by fiscal year 2015, more than a twelve-fold increase. The upgrade is expected to improve the horizontal grid scale by a factor of 3, allowing more precise forecasting of local weather features. Some of the allocated funds will also be used to hire contract scientists to improve the forecast model physics and enhance the collection and assimilation of data."
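The "twelve-fold" claim follows directly from the two figures quoted; a one-line check, using only numbers from the summary:

```python
# The quoted NWS upgrade: 213 teraflops today to 2,600 teraflops by FY2015.
current_tf, upgraded_tf = 213, 2600
factor = upgraded_tf / current_tf
print(round(factor, 1))  # 12.2, i.e. "more than a twelve-fold increase"
```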
ananyo writes "D-Wave, the small company that sells the world's only commercial quantum computer, has just bagged an impressive new customer: a collaboration between Google, NASA and the non-profit Universities Space Research Association. The three organizations have joined forces to install a D-Wave Two, the company's latest model, in a facility launched by the collaboration — the Quantum Artificial Intelligence Lab at NASA's Ames Research Center. The lab will explore areas such as machine learning — useful for functions such as language translation, image searches and voice-command recognition. The Google-led collaboration is only the second customer to buy a computer from D-Wave — Lockheed Martin was the first."
An anonymous reader sends this excerpt from Wired: "[Henry] Markram was proposing a project that has bedeviled AI researchers for decades, that most had presumed was impossible. He wanted to build a working mind from the ground up. ... The self-assured scientist claims that the only thing preventing scientists from understanding the human brain in its entirety — from the molecular level all the way to the mystery of consciousness — is a lack of ambition. If only neuroscience would follow his lead, he insists, his Human Brain Project could simulate the functions of all 86 billion neurons in the human brain, and the 100 trillion connections that link them. And once that's done, once you've built a plug-and-play brain, anything is possible. You could take it apart to figure out the causes of brain diseases. You could rig it to robotics and develop a whole new range of intelligent technologies. You could strap on a pair of virtual reality glasses and experience a brain other than your own."
anzha writes "Horst Simon, Deputy Director of Lawrence Berkeley National Laboratory, has stood up at conferences of late and said the unthinkable: supercomputing is hitting a wall, and the field will not build an exaFLOPS HPC system by 2020, defined as one that passes Linpack with a sustained performance of one exaFLOPS or better. He's even placed money on it. You can read the original presentation here."
Nerval's Lobster writes "Japan has thrown its hat into the ring for exascale computing, the country's newspapers report. The goal: achieve one exaFLOPS of performance by 2020. Japan's finance ministry has agreed to begin work next fiscal year on a supercomputer with a performance capability 100 times that of the K computer, a 10-petaFLOPS computer that debuted as the most powerful supercomputer in the world in 2011. The midterm report for the new supercomputer was concluded Thursday, the Asahi Shimbun business daily reported. The Japan Times was slightly more conservative, reporting that the Education, Culture, Sports, Science and Technology Ministry will seek funding to design the new machine in its fiscal 2014 budget request — implying that the project has not necessarily been approved. The science ministry is hoping to keep the cost of the new supercomputer below the ¥110 billion mark ($1.08 billion) that was required to develop the K computer, the paper reported. (Slashdot couldn't find any evidence that the project had been approved on the ministry's website, although the K computer was mentioned several times in a discussion of public-private partnerships.)"
Nerval's Lobster writes "The 'Sequoia' Blue Gene/Q supercomputer at the Lawrence Livermore National Laboratory (LLNL) has topped a new HPC record, helped along by the optimistic 'Time Warp' synchronization protocol and a benchmark that exposes parallelism and keeps improving performance as the system scales out to more cores. Scientists at the Rensselaer Polytechnic Institute and LLNL said Sequoia topped 504 billion events per second, breaking the previous record of 12.2 billion events per second set in 2009. The scientists believe that such performance enables them to reach so-called 'planetary'-scale calculations, enough to factor in all 7 billion people in the world, or the billions of hosts found on the Internet. 'We are reaching an interesting transition point where our simulation capability is limited more by our ability to develop, maintain, and validate models of complex systems than by our ability to execute them in a timely manner,' Chris Carothers, director of the Computational Center for Nanotechnology Innovations at RPI, wrote in a statement."
Lank writes "A team of computer scientists from Lawrence Livermore National Laboratory and Rensselaer Polytechnic Institute has managed to coordinate nearly 2 million cores to achieve a blistering 504 billion events per second, over 40 times faster than the previous record. This result was achieved on Sequoia, a 120-rack IBM Blue Gene/Q normally used to run classified nuclear simulations. Note: I am a co-author of the upcoming paper to appear in PADS 2013."
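The "nearly 2 million cores" and "over 40 times faster" claims can be cross-checked against the figures in the two items above. The cores-per-rack number is an assumption based on Blue Gene/Q's standard configuration (1,024 nodes of 16 cores per rack); the rest comes from the summaries themselves.

```python
# Cross-check of the Sequoia record.
# Assumption: a Blue Gene/Q rack holds 1,024 nodes x 16 cores = 16,384 cores.
cores = 120 * 16384         # 120 racks -> 1,966,080 cores, "nearly 2 million"
events_per_sec = 504e9
prev_record = 12.2e9        # the 2009 record quoted in the previous item

print(round(events_per_sec / prev_record, 1))  # ~41.3x, i.e. "over 40 times faster"
print(round(events_per_sec / cores))           # roughly 256,000 events/sec per core
```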
Indiana University has replaced their supercomputer, Big Red, with a new system predictably named Big Red II. At the dedication HPC scientist Paul Messina said: "It's important that this is a university-owned resource. ... Here you have the opportunity to have your own faculty, staff and students get access with very little difficulty to this wonderful resource." From the article: "Big Red II is a Cray-built machine, which uses both GPU-enabled and standard CPU compute nodes to deliver a petaflop -- or 1 quadrillion floating-point operations per second -- of max performance. Each of the 344 CPU nodes uses two 16-core AMD Abu Dhabi processors, while the 676 GPU nodes use one 16-core AMD Interlagos and one NVIDIA Kepler K20."
Lucas123 writes "In June, Harvard's Clean Energy Project plans to release to solar power developers a list of the top 20,000 organic compounds, any one of which could be used to make cheap, printable photovoltaic cells (PVCs). The CEP uses the computing resources of IBM's World Community Grid for the computational chemistry needed to cull the best molecules for organic photovoltaics from a pool of about 7 million candidates. About 6,000 computers are part of the project at any one time. If successful, the crowdsourcing-style project, which has been crunching data for the past two-plus years, could lead to PVCs that cost about as much as paint to cover a one-square-meter wall." The big thing here is that they've discovered a lot of organic molecules that have the potential for 10% or better conversion efficiency; roughly equivalent to the current best PV material, and twice as efficient as other available organic PV materials.
Nerval's Lobster writes "T-Platforms, which manufactured the fastest supercomputer in Russia (and twenty-sixth fastest in the world), has been placed on the IT equivalent of the no-fly list. In March, the U.S. Department of Commerce's Bureau of Industry and Security added T-Platforms' businesses in Germany, Russia and Taiwan to the 'Entity List,' which includes those believed to be acting contrary to the national security or foreign policy interests of the United States. U.S. IT companies are essentially banned from doing business with T-Platforms, especially with regard to HPC hardware such as microprocessors, which could be used for what the government views as illegal purposes. The rule, discovered by HPCWire, was published in March. According to the rule, Commerce's End-User Review Committee (ERC) believes that T-Platforms may be helping the Russian government and military conduct nuclear research, which, given historical tensions between the two countries, apparently falls outside the bounds of permitted use. An email address that T-Platforms listed for its German office bounced, and Slashdot was unable to reach executives at its Russian headquarters for comment."
An anonymous reader writes "In 2008, Roadrunner was the world's fastest supercomputer. Now that the first system to break the petaflop barrier has lost a step on today's leaders, it will be shut down and dismantled. In its five years of operation, Roadrunner was the 'workhorse' behind the National Nuclear Security Administration's Advanced Simulation and Computing program, providing key computer simulations for the Stockpile Stewardship Program."