In January 2022, Nature Computational Science is celebrating its one-year anniversary. The journal's first year reflected the multidisciplinary nature of computational science research, as we published a truly diverse set of inspiring articles from many different domains. With this collection, we highlight some of the research articles published during our first year that reported stimulating ideas, methods and results across a range of fields, including the biological, physical and environmental sciences.
Through parametric sensitivity analysis and uncertainty quantification of the CovidSim model, the authors identify the subset of parameters to which the code's output is most sensitive. Focusing on these parameters enables better-informed decisions about proposed policies.
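The screening idea behind such an analysis can be sketched with a simple one-at-a-time sensitivity scan on a toy outbreak model (a minimal illustration only: the model, parameter names and ranges below are hypothetical stand-ins, not CovidSim's):

```python
import numpy as np

def toy_model(beta, gamma, i0, days=60):
    """Hypothetical discrete-time SIR-style model; returns the peak
    infected fraction of the population."""
    s, i = 1.0 - i0, i0
    peak = i
    for _ in range(days):
        new_inf = beta * s * i
        s -= new_inf
        i += new_inf - gamma * i
        peak = max(peak, i)
    return peak

def one_at_a_time_sensitivity(model, base, spans, n=16):
    """Vary each parameter across its span while holding the others at
    their base values; the output range is a crude sensitivity score."""
    sens = {}
    for name, (lo, hi) in spans.items():
        outs = [model(**dict(base, **{name: v}))
                for v in np.linspace(lo, hi, n)]
        sens[name] = max(outs) - min(outs)
    return sens

base = {"beta": 0.3, "gamma": 0.1, "i0": 1e-4}
spans = {"beta": (0.2, 0.4), "gamma": (0.05, 0.2), "i0": (1e-5, 1e-3)}
ranking = one_at_a_time_sensitivity(toy_model, base, spans)
```

Sorting `ranking` by value identifies the parameters that dominate the output; more rigorous studies would use variance-based (e.g., Sobol) indices, which also capture parameter interactions.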
Using a statistical method for transient correlations, the waxing and waning in levels of population infection by SARS-CoV-2 are shown to respond to temperature and absolute humidity, across geographical locations and for different temporal and spatial resolutions.
Deep graph neural networks can refine a predicted protein model efficiently with fewer computing resources, achieving accuracy comparable to that of leading physics-based methods that rely on time-consuming conformational sampling.
Spiking neural network simulations are very memory-intensive, limiting large-scale brain simulations to high-performance computer systems. Knight and Nowotny propose using procedural connectivity to substantially reduce the memory footprint of these models, such that they can run on standard GPUs.
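The core trick of procedural connectivity is to regenerate a neuron's connections deterministically from a seed whenever they are needed, rather than storing the full connectivity matrix. A minimal sketch of the idea (function names and the seeding scheme here are illustrative, not the authors' GPU implementation):

```python
import numpy as np

def procedural_row(pre_neuron, n_post, p_connect, base_seed=1234):
    """Regenerate neuron `pre_neuron`'s outgoing connections on demand
    from a deterministic per-neuron seed.  Only the seed (one integer)
    needs to be kept in memory, not the connection list itself."""
    rng = np.random.default_rng(base_seed + pre_neuron)
    return np.flatnonzero(rng.random(n_post) < p_connect)

# The same row is reproduced bit-identically every time it is requested,
# trading a small amount of recomputation for a large memory saving.
row_a = procedural_row(42, n_post=100_000, p_connect=0.001)
row_b = procedural_row(42, n_post=100_000, p_connect=0.001)
```

For a network of N neurons with K synapses each, this replaces O(N·K) stored indices with O(N) seeds, which is what makes large-scale simulations fit on a single GPU.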
Multi-fidelity graph networks learn more effective representations for materials from large data sets of low-fidelity properties, which can then be used to make accurate predictions of high-fidelity properties, such as the band gaps of ordered and disordered crystals and energies of molecules.
A class of quantum neural networks is presented that outperforms comparable classical feedforward networks: the quantum models achieve a higher capacity in terms of effective dimension and at the same time train faster, suggesting a quantum advantage.
This work demonstrates that large gains still exist in accelerating and improving the coverage of reaction prediction algorithms. These advances create opportunities for computationally exploring deeper and broader reaction networks.
Combining bioinformatics data with atomistic simulations, this study develops a sequence-dependent coarse-grained model of biomolecular phase separation that achieves quantitative agreement with experimental observations, as demonstrated by extensive benchmarks.
This work proposes a probabilistic graphical model as a formal mathematical foundation for digital twins, and demonstrates how this model supports principled data assimilation, optimal control and end-to-end uncertainty quantification.
Climate data are often stored at higher precision than is needed. The proposed compression method automatically determines the required precision from the data's bitwise real information content, removing false information and leading to more efficient compression.
An analysis of GPS pedestrian traces shows that (1) people increasingly deviate from the shortest path as the distance between origin and destination increases and that (2) chosen paths are statistically different when origin and destination are swapped. Ultimately, this can explain the observed human tendency to select different paths on outbound and return trips.