Intel Takes Aim at “Cool Technology”

When I last wrote about Intel, exactly 30 days ago, the company had yet to announce a replacement for outgoing CEO Paul Otellini, and there was a lot of speculation about the company’s direction.

A lot can change in a month. On 2 May, Intel announced the promotion of 30-year Intel veteran Brian Krzanich to the chief executive role. And earlier this week, Reuters broke the news of a “sweeping” reorganization. Krzanich himself will now directly oversee most of the main product groups, including the company’s PC and mobile units. He has also formed a “new devices” group. Mobile chip guru and Palm and Apple veteran Mike Bell has reportedly been tapped to head it up.

What will this “new devices” unit do exactly? AllThingsD says it will focus at least in part on “ultra-mobile products” and quotes a statement from the company that “the group will be tasked with turning cool technology and business model innovations into products that shape and lead markets”. PCWorld speculates the new group will focus less on playing catch-up in the smartphone and tablet markets (which are still dominated by ARM-aligned companies) than on jazzier new products, such as Google Glass.

But Intel has invested a lot in its pursuit of the mobile market. Earlier this month—what a busy month!—the company unveiled Silvermont, a chip architecture that is optimized for power consumption. We’ll likely have to wait until at least the end of the year, when the first chips in the Silvermont family ship, to see whether all that hard work has paid off.

China’s Tianhe-2 Caps Top 10 Supercomputers

Every year, in June and in November, the Top500 list shows which supercomputers can crank out the most calculations per second. This go-around, the number one system showed that all the rumors leading up to the reveal were true. The Tianhe-2, a massive system that clocked 33.86 petaflops, or 33.86 thousand trillion floating point operations per second, represents China’s return to the No. 1 spot—a distinction it has not held since November 2010, when its Tianhe-1A was considered the world’s finest computing system.

The Top500 list is typically topped by a U.S. Department of Energy machine. But Tianhe-2 trounces the department’s entrants, including the old top dog on the list, a supercomputer called Titan, housed at Oak Ridge National Laboratory. Titan clocked 17.59 petaflops, a little over half of Tianhe-2’s supercomputing muscle.

Built at China’s National University of Defense Technology, Tianhe-2 (also known as the Milky Way-2) consists of 16 000 nodes. Inside each node, two Intel Xeon Ivy Bridge processors and three Xeon Phi processors run the show, adding up to a total of 3.12 million computing cores. The machine is scheduled to be fully operational by the end of this year.
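That core count is easy to sanity-check. The quick calculation below reproduces the 3.12 million total, assuming the widely reported per-chip figures of 12 cores per Ivy Bridge Xeon and 57 cores per Xeon Phi (numbers not stated in this article):

```python
# Sanity-check Tianhe-2's total core count.
# Per-chip core counts are assumptions (widely reported specs,
# not given in the article above).
NODES = 16_000
XEON_CORES = 12   # per Ivy Bridge Xeon (assumed)
PHI_CORES = 57    # per Xeon Phi coprocessor (assumed)

cores_per_node = 2 * XEON_CORES + 3 * PHI_CORES  # 24 + 171 = 195
total_cores = NODES * cores_per_node

print(f"{total_cores:,} cores")  # → 3,120,000 cores
```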

Tianhe-2’s surprise arrival symbolizes China’s unflinching commitment to the supercomputing arms race; the machine was not expected to be deployed until 2015. Moreover, it uses technologies that have almost all been invented in China, according to Top500 editor Jack Dongarra.

“Most of the features of the system were developed in China, and they are only using Intel for the main compute part. The interconnect, operating system, front-end processors and software are mainly Chinese,” he said in a statement. Dongarra saw the Tianhe-2 system in May, which led to a flurry of leaks about its tremendous power and capabilities earlier this month.

But the United States is holding fast to its overall dominance of the Top500 list: 253 of the 500 systems are still American-made. China comes in second place, claiming 65 systems on the list, followed by Japan, the U.K., France, and Germany.

With Tianhe-2 now at the top, Sequoia—an IBM BlueGene/Q system at the DOE’s Lawrence Livermore National Laboratory and formerly the world’s No. 2 supercomputer—dropped to third place. Sequoia, with its 1.57 million cores, first came online in 2011 and scored 17.17 petaflops on the Linpack benchmark. Three more IBM BlueGene/Q systems made the top 10 list, coming in fifth, seventh, and eighth places.

Fujitsu’s “K computer” installed at the RIKEN Advanced Institute for Computational Science (AICS) in Kobe, Japan, sits at No. 4. The rest of the top 10 include: the upgraded Stampede at the Texas Advanced Computing Center of the University of Texas, Austin; JUQUEEN at the Forschungszentrum Juelich in Germany (the most powerful system in Europe); SuperMUC, an IBM iDataPlex system installed at Leibniz Rechenzentrum in Germany; and Tianhe-1A at the National Supercomputing Center in Tianjin, China, holding steady at No. 10.

Most Efficient Supercomputers Ranked On Green500 List

Twice a year, three lists come out to rank the world’s supercomputers: by overall performance, by ability to parse enormous data sets, and by environmental impact. Last week the Top500, Graph500, and Green500 lists were released at the SC13 conference in Denver.

For more than 20 years, the Top500 list has ranked supercomputers by brute-force computing power: the number of floating-point operations each machine can process per second. But the push to parse huge data sets and to reduce environmental impact has given rise to the Graph500 and Green500 lists.

The Green500 list provides a window into how difficult it is to build a supercomputer that can compete in terms of efficiency and low energy consumption. Only one supercomputer on this year’s Green500, the Piz Daint from the Swiss National Supercomputing Centre, is also ranked in the top 10 of the Top500.

Green supercomputers tend to be “heterogeneous,” meaning that their processor cores use multiple architectures that lend themselves to different computing tasks. Traditional processing elements like central processing units (CPUs) and graphics processing units (GPUs) are integrated with various coprocessors so any calculation the computer may perform can be done in the most efficient way possible.

The founder of the Green500 list, Virginia Tech computer scientist Wu Feng, told IEEE Spectrum in July that, “Overall, the performance of machines on the Green500 List has increased at a higher rate than their power consumption. That’s why the machines’ efficiencies are going up.” But he added that gains at the top of the list are more dramatic than the improvements down the rankings.

Scientists Confirm D-Wave’s Computer Chips Compute Using Quantum Mechanics

A strategy of “show, don’t tell” for quantum computing seems to be paying off for Canadian company D-Wave. The latest validation for D-Wave’s quantum computer claims comes from a paper published in the 28 June edition of the journal Nature Communications.

Testing of the D-Wave chip—housed at the USC-Lockheed Martin Quantum Computing Center—suggested that the device does use quantum mechanics to solve optimization problems. Once quantum computers scale up to have enough processing power, they could prove much faster than classical computers in tackling certain problems, according to the new paper.

“Our work seems to show that, from a purely physical point of view, quantum effects play a functional role in information processing in the D-Wave processor,” says Sergio Boixo, a researcher who led the study while he was a research assistant professor in computer science at the University of Southern California, in a press release.

Most research labs have only succeeded in building quantum computing processors with just a few quantum bits (qubits). Unlike classical computing bits that exist as either a 1 or 0, qubits can exist in multiple states at the same time due to the strange rules of quantum physics that dominate reality at very small scales.
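The difference between bits and qubits is easy to illustrate numerically. A single qubit’s state is a pair of complex amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1; the sketch below (plain Python, purely illustrative and not tied to any D-Wave API) shows an equal superposition:

```python
import math

# A classical bit is either 0 or 1. A qubit's state is a pair of
# complex amplitudes (a, b) with |a|^2 + |b|^2 = 1; measuring it
# yields 0 with probability |a|^2 and 1 with probability |b|^2.
a = 1 / math.sqrt(2)   # amplitude for state |0>
b = 1 / math.sqrt(2)   # amplitude for state |1>

p0, p1 = abs(a) ** 2, abs(b) ** 2
assert math.isclose(p0 + p1, 1.0)   # amplitudes must be normalized

print(round(p0, 3), round(p1, 3))   # → 0.5 0.5
```

An equal superposition like this means the qubit is, in effect, both values at once until it is measured.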

That’s why D-Wave initially drew skepticism for claiming to have built quantum processors with hundreds of qubits. But rather than follow research labs in trying to build general-purpose quantum computers, D-Wave has developed specialized quantum annealing devices for solving optimization problems.

The Canadian company has slowly won over some critics by giving independent researchers access to its D-Wave machines and inviting them to test its claims. One such test revealed that D-Wave machines could already beat classical computers in solving certain optimization problems.

D-Wave has also attracted notable tech giants as its first commercial customers. The company made its first commercial sale to Lockheed Martin in 2011, and has sold a second chip to Google for future installation at NASA’s Ames Research Center in Moffett Field, California.

Members of the University of Southern California team previously published a paper about D-Wave’s quantum computing device on the arXiv preprint server in April. Their new paper in Nature Communications—a test of a D-Wave “Rainier” chip with 108 functional qubits—may give former skeptics fresh hope that quantum computing has, in fact, become a reality.

The USC team has barely paused for breath in its race to study quantum computing. USC’s Quantum Computing Center received an upgrade to a new 512-qubit “Vesuvius” chip two months ago—the next machine up for a test drive.

Eurora Supercomputer Tops Green500 List

By Davey Alba
Posted 3 Jul 2013 | 17:25 GMT

Conventionally, supercomputers have been ranked by number-crunching might, but not everyone agrees that this is the most important metric. These days, cars are evaluated by fuel consumed per kilometer, not maximum speed; supercomputers are following this trend, too. Last Friday marked the latest release of the biannual Green500 list, which grades these machines in a nontraditional way: by performance per watt. The list complements the Top500, which sorts supercomputers by how fast each machine can solve equations.

Notably, the top end of the Green500 list is dominated by “heterogeneous” supercomputers: machines built around processor cores that combine a mix of architectures suited to the different computing tasks the machine will execute. Various processing elements are fused together, such as traditional central processing units (CPUs), graphics processing units (GPUs), and other types of coprocessors.

Two new entrants—both heterogeneous systems based on NVIDIA’s Kepler K20 GPU accelerators—took the top two spots and cleared the bar of three billion floating-point operations per second (gigaflops) per watt. Eurora, installed at the CINECA Supercomputing Center in Italy, topped Green500 at 3.21 gigaflops/watt, and Aurora Tigon was a close second at 3.18 gigaflops/watt. The machines, both manufactured by Eurotech, bested the previous titleholder, University of Tennessee’s Beacon (2.45 gigaflops/watt), by 30 percent.
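For a sense of scale, the efficiency gap works out as follows, using only the figures quoted above:

```python
# Compare the Green500 efficiency figures quoted above (gigaflops/watt).
eurora = 3.21
aurora_tigon = 3.18
beacon = 2.45   # previous titleholder

improvement = (eurora - beacon) / beacon * 100
print(f"Eurora over Beacon: {improvement:.0f}%")  # → Eurora over Beacon: 31%
```

That works out to roughly 30 percent, as cited above.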

On the Top500 list, Eurora is ranked way down at No. 467, delivering 100.9 teraflops on the Linpack benchmark. However, what’s interesting about the machine is not its peak performance, but rather its energy reuse system. Similar to the iDataCool solution IEEE Spectrum reported on last week, Eurora relies on a water cooling system to draw out excess heat; it then reuses this energy to drive adsorption chillers that cool the data center. It also redirects some of the waste heat to warm up other, human-occupied buildings at the facility.

On the other hand, Tianhe-2, which recently capped the Top500 list as the world’s fastest supercomputer, delivered 1.9 gigaflops per watt, thanks largely to the heterogeneous computing elements of its Intel Xeon Phi coprocessors. That was enough for No. 31 on the Green500, and it reflects the energy-efficiency considerations many supercomputer manufacturers now weigh when building these systems.
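Those two figures also imply Tianhe-2’s power draw. Dividing sustained performance by efficiency, both as quoted above, gives roughly 17.8 megawatts:

```python
# Estimate Tianhe-2's power draw from the figures quoted above.
linpack_flops = 33.86e15   # 33.86 petaflops (Linpack result)
efficiency = 1.9e9         # 1.9 gigaflops per watt

power_watts = linpack_flops / efficiency
print(f"{power_watts / 1e6:.1f} MW")  # → 17.8 MW
```

That is roughly the electricity demand of a small town, which is why performance per watt has become a first-order design constraint for machines of this class.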

According to Wu Feng, founder of Green500 and professor of computer science and electrical and computer engineering at Virginia Tech, the energy efficiencies of the highest-ranked machines on the Green500 list have been making much bigger gains than have the mean or the median. “Overall, the performance of machines on the Green500 List has increased at a higher rate than their power consumption. That’s why the machines’ efficiencies are going up,” says Feng. “While the gains at the top end of the Green500 appear impressive, overall the improvements have been much more modest.”

But while there is still work to do, Feng believes that his team has made great strides overall in bringing green computing to the forefront of high-performance computing. “What we’ve done is raise awareness that energy efficiency is truly important,” says Feng. “I didn’t really feel like it got mainstream until 2007 or 2008… Up until that point, nobody really cared about energy efficiency; the only thing people cared about was speed. So it’s really come around full circle.”