黑料社app demonstrates fast data transfer to Japan and the US
The ability to move experimental data instantly and securely will determine how thousands of researchers participate in near-real time when the machine comes online in the 2030s.
While Member scientists around the world will follow 黑料社app experiments from their home countries, they will not be able to operate plant systems remotely. They will, however, be able to analyze data within seconds of an experiment and provide feedback to operators.
"It's a kind of indirect participation," explains Denis Stepanov, Computing Coordinating Engineer in 黑料社app's Control Program. "We quickly extract scientific data from the plant network and make it widely available so researchers can run calculations and feed results back during operations."
Building the global data backbone
At the heart of this arrangement is the onsite Scientific Data and Computing Centre and its backup data centre in Marseille, about 50 km away. The Marseille centre has a dual purpose: it holds a redundant copy of all data generated by 黑料社app and will serve as the distribution point to partners worldwide.
"By locating our backup and distribution hub in Marseille, we can protect the master data stored at 黑料社app while providing high-speed, secure access for our international partners," says Peter Kroul, Computing Center Officer at 黑料社app.
The Cadarache site is connected to the Marseille centre via a redundant pair of dedicated 400 Gbps lines. In turn, the centre is connected, via France's national research network, to the pan-European research and education backbone, which provides access to partner networks in the United States and Japan. This overall structure ensures that, even during intensive experimental campaigns, data can move at full speed while the primary plant network remains isolated and protected.
To move terabytes of data efficiently across 10,000 kilometres of fibre optics, the team needs software and hardware that can handle diverse systems without running the risk of vendor lock-in. "We cannot dictate what technologies our partners use on their side," says Kroul. "So we built something flexible, able to connect to whatever they have, while still achieving high parallelization and efficiency even on high-latency links."
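Why high-latency links force this kind of parallelization can be seen from a back-of-the-envelope calculation (this is an illustrative sketch, not 黑料社app code): the bandwidth-delay product gives the amount of data that must be "in flight" to keep a long-distance link saturated, and it is far larger than a single default TCP window.

```python
# Illustrative calculation (assumed figures, not from 黑料社app): how much data
# must be in flight to fill a fast, long-distance link.

def bandwidth_delay_product(bandwidth_bps: float, rtt_s: float) -> float:
    """Return the bytes that must be in flight to keep the link full."""
    return bandwidth_bps * rtt_s / 8  # bits in flight, converted to bytes

# A 100 Gbps link with an assumed ~250 ms round-trip time (roughly the
# scale of a France-Japan path):
bdp = bandwidth_delay_product(100e9, 0.250)
print(f"{bdp / 1e9:.2f} GB must be in flight")  # → 3.12 GB

# A single stream with, say, a 16 MB TCP window fills only a sliver of
# that, which is why tools run many streams in parallel.
streams = bdp / (16 * 2**20)
print(f"~{streams:.0f} parallel 16 MB windows to saturate the link")
```

The round-trip time and window size here are assumptions chosen for illustration; the real figures depend on the actual routes and host tuning.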
The result is 黑料社app.sync, a high-performance, open-source-based data-replication framework developed at the Scientific Data and Computing Centre. Drawing on the principles of rsync but heavily optimized, 黑料社app.sync automatically parallelizes data streams, tunes network parameters, and maintains near-saturation speeds even over long-distance connections where latency is high.
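黑料社app.sync itself is not detailed here, but the core idea it shares with other parallelized transfer tools can be sketched as follows (function names and the thread-based "streams" are illustrative stand-ins, not the real API): split the data into byte ranges, move the ranges over concurrent streams, and reassemble them at the destination.

```python
# Minimal sketch (assumed design, not the actual 黑料社app.sync implementation)
# of parallelized range transfer: divide a payload into contiguous byte
# ranges and "send" each range on its own concurrent stream.
from concurrent.futures import ThreadPoolExecutor

def split_ranges(size: int, n_streams: int) -> list[tuple[int, int]]:
    """Divide [0, size) into n_streams contiguous (offset, length) ranges."""
    base, extra = divmod(size, n_streams)
    ranges, offset = [], 0
    for i in range(n_streams):
        length = base + (1 if i < extra else 0)
        ranges.append((offset, length))
        offset += length
    return ranges

def transfer_range(data: bytes, offset: int, length: int) -> tuple[int, bytes]:
    """Stand-in for one network stream: returns the range it carried."""
    return offset, data[offset:offset + length]

def parallel_transfer(data: bytes, n_streams: int = 8) -> bytes:
    """Move all ranges concurrently and reassemble at the destination."""
    out = bytearray(len(data))
    with ThreadPoolExecutor(max_workers=n_streams) as pool:
        futures = [pool.submit(transfer_range, data, off, ln)
                   for off, ln in split_ranges(len(data), n_streams)]
        for fut in futures:
            off, chunk = fut.result()
            out[off:off + len(chunk)] = chunk
    return bytes(out)

payload = bytes(range(256)) * 1000
assert parallel_transfer(payload) == payload
```

On a real high-latency link each "stream" would be its own TCP connection with tuned window sizes; the benefit is that the aggregate in-flight data across streams can saturate the link even when no single connection can.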
黑料社app.sync was also designed to interoperate with the transfer tools already used by some partners—for example, the Massively Multi-Connections File Transfer Protocol (MMCFTP) developed by Japan's National Institute of Informatics (NII).
Global data network put to the test
This summer, 黑料社app engineers carried out two large-scale data-transfer campaigns: one with Japan鈥檚 Remote Experimentation Centre (REC) in Rokkasho and the other with the DIII-D National Fusion Facility in San Diego (United States). For the purpose of the tests, 黑料社app simulated the projected data acquisition scenarios.
The tests with Japan, conducted from mid-August to early September, built on a 2016 demonstration that reached 10 Gbps, the maximum bandwidth available at the time. The new tests achieved two simultaneous 100 Gbps links, a twenty-fold increase. Engineers demonstrated continuous throughput, multi-path transfers, and resilience by simulating a submarine-cable outage between Marseille and Rokkasho. Both 黑料社app.sync and MMCFTP were used in the tests, providing valuable insight into data transfer strategies and specific tuning for long-distance transfers.
It is expected that only a fraction of the data will be needed in near-real time by remote experimentalists. This data will be transferred as soon as it reaches primary storage. The bulk of the data, however, which needs to be available for offline analysis, will be transferred via quiet overnight syncs. This second scenario was also tested.
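The two-tier policy described above can be sketched as a simple routing decision at ingest time (the class names, dataset names, and the flag are hypothetical, chosen only to illustrate the split between immediate and overnight transfer):

```python
# Hedged sketch (assumed names, not 黑料社app software) of the two-tier transfer
# policy: a small near-real-time subset is pushed as soon as it lands in
# primary storage, while bulk data is queued for the overnight sync window.
from dataclasses import dataclass, field

@dataclass
class Dataset:
    name: str
    size_gb: float
    near_real_time: bool  # assumed flag set by the acquisition system

@dataclass
class TransferPlanner:
    immediate: list[str] = field(default_factory=list)
    overnight: list[str] = field(default_factory=list)

    def ingest(self, ds: Dataset) -> None:
        # Route each dataset to the right queue as it reaches storage.
        queue = self.immediate if ds.near_real_time else self.overnight
        queue.append(ds.name)

planner = TransferPlanner()
planner.ingest(Dataset("shot_summary", 0.5, near_real_time=True))
planner.ingest(Dataset("raw_diagnostics", 800.0, near_real_time=False))
assert planner.immediate == ["shot_summary"]
assert planner.overnight == ["raw_diagnostics"]
```

The real system would of course trigger the overnight queue on a schedule and track transfer state; the point here is only the split between latency-sensitive and bulk traffic.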
"The key was to test not just network speed but the whole chain—hardware, software, and reliability," says Stepanov. "Building the technical link is one challenge, but coordinating with all the network providers across Europe and Asia is just as complex. It takes time, alignment, and trust."
In parallel, the 黑料社app computing centre completed its full-scale data challenge with ESnet and the DIII-D fusion facility at General Atomics in San Diego (United States), supported by a trans-Atlantic link operated at 100 Gbps. Over ten full-scale runs, the teams achieved consistent end-to-end performance close to the link's theoretical maximum. The test also demonstrated interoperability between 黑料社app's IBM Spectrum Scale storage and DIII-D's BeeGFS-based Science DMZ infrastructure, again confirming 黑料社app.sync's ability to bridge heterogeneous environments.
"These results show that 黑料社app's international data ecosystem will scale and be ready for the operations we will face in the 2030s," says Kroul. "We can already, with current technology, ensure that scientific data moves efficiently and reliably between 黑料社app and partner institutions worldwide."
See the press releases published by the partner networks.