The new protocol sends data at 7 gigabytes a minute. Credit: © Getty Images

A new piece of software more than trebles the speed at which information can be sent over the Internet. It changes the way that computers monitor and respond to online traffic conditions.

Steven Low, of the California Institute of Technology in Pasadena, and his colleagues have sent data 3,500 times faster than a typical broadband connection - at about 7 gigabytes a minute. This is a new speed record for data transfer.

They designed the software with physicists in mind. But it could help biologists, engineers and medics swap information. And it might one day distribute online films to cinemas or homes. At top speed, the new technique could send a DVD movie in less than five seconds.

"We hope to have a version ready for public release by the summer," says Low. He plans to make the software freely available for all to use and adapt.

Congestion charge

When they smash particles together, labs such as the Stanford Linear Accelerator Center (SLAC) in California create vast amounts of data. "It is doubling each year," says Les Cottrell, assistant director of computer services at SLAC.

A single file can be 1,000 gigabytes in size, and can take a day to send to collaborators around the world. Scientists are disadvantaged if they have to wait, Cottrell explains: "They don't feel part of the experiment."

Physics labs have high-speed connections and immense computing power. But their networks operate at less than a third of their maximum efficiency.

The problem is the Transmission Control Protocol (TCP), which manages the data that flows between computers. The TCP chops information into small packets - the receiving and sending computers communicate to check that every packet has arrived intact, and the sender re-sends those that didn't.
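The chop-acknowledge-resend cycle can be sketched in a few lines. This is an illustration of the idea, not real TCP: the function names, the toy "lossy channel", and the loss rate are all invented for the example.

```python
# Illustrative sketch (not real TCP): chop a message into numbered
# packets, "send" them over a lossy channel, and re-send any packet
# whose acknowledgement never arrives.
import random

def chop(message: bytes, size: int) -> dict[int, bytes]:
    """Split a message into numbered packets of at most `size` bytes."""
    return {i: message[i:i + size] for i in range(0, len(message), size)}

def transfer(message: bytes, size: int = 4, loss_rate: float = 0.3) -> bytes:
    packets = chop(message, size)
    received: dict[int, bytes] = {}
    while len(received) < len(packets):      # keep going until all are acked
        for seq, data in packets.items():
            if seq in received:
                continue                     # already acknowledged
            if random.random() > loss_rate:  # packet survived the network
                received[seq] = data         # receiver acknowledges it
            # else: no acknowledgement, so the sender retries next pass
    return b"".join(received[seq] for seq in sorted(received))
```

Because lost packets are simply retried, the reassembled message always matches the original, however unreliable the channel.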


If many packets go astray, the TCP deduces that the network is congested. It halves the sending rate, then begins edging back up towards the maximum.

This worked fine for the Internet of the late 1980s, when the TCP was invented. But it copes less well with powerful twenty-first-century networks. "The adaptation is too drastic," Low explains. "The speed jumps around from too high to too low."

It's like driving a car by flooring the accelerator for as long as you can, and then stamping on the brake when you hit traffic.
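The halve-and-climb behaviour Low describes can be sketched as a simple update rule. The function name, the starting rate and the round on which loss strikes are all illustrative assumptions, not part of the real protocol's numbers.

```python
# A minimal sketch of TCP-style additive-increase/multiplicative-decrease:
# the sending rate creeps up by one step per round trip, then is halved
# the moment loss is detected.
def aimd_step(rate: float, loss_detected: bool, increment: float = 1.0) -> float:
    if loss_detected:
        return rate / 2          # drastic back-off: halve the rate
    return rate + increment      # otherwise edge back up linearly

rate = 64.0
history = []
for round_trip in range(10):
    loss = round_trip == 4       # pretend congestion strikes on round 5
    rate = aimd_step(rate, loss)
    history.append(rate)
# rates climb steadily to 68, crash to 34, then start climbing again
```

The sawtooth in `history` is exactly the "too high, then too low" jumping that the article describes.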

FAST talker

Caltech's alternative is called FAST, for Fast Active queue-management Scalable TCP. It detects congestion by measuring the delay between sending a packet of data and receiving an acknowledgement. As this delay increases, it eases off - just a little.

This deals with congestion before the error rate rises. "It allows you to adapt more smoothly," says Low. In tests using existing hardware and networks, FAST has run the international links between labs at more than 95% efficiency.
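The delay-based idea can be sketched as follows. This is a hedged illustration of the principle, not the published FAST algorithm: the function name, the gain and the queue target `alpha` are invented constants.

```python
# Delay-based congestion control in miniature (not the real FAST TCP):
# estimate how many of this sender's packets are queued in the network
# from the gap between the current and the best-seen round-trip time,
# then nudge the window smoothly toward a small queue target.
def delay_based_step(window: float, rtt: float, base_rtt: float,
                     gain: float = 0.5, alpha: float = 10.0) -> float:
    # packets this flow has sitting in network queues (0 when rtt == base_rtt)
    queued = window * (rtt - base_rtt) / rtt
    # grow gently while below the target, ease off gently when above it
    return window + gain * (alpha - queued)
```

Unlike the loss-driven rule, nothing here ever halves: rising delay produces a proportionally small correction, which is why the adaptation is smoother.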


SLAC already uses tricks to increase its transmission rates, such as sending several data streams at once. But this system is prone to breaking down. "FAST is a big simplification on how we do things now, and that's a major advance," says Cottrell.

Low's team is not yet ready to unleash its creation on the open Internet. Online traffic is a balance of many different information flows, and the team still needs to ensure that FAST will not hog the information superhighway at other users' expense.