Abstract
Finding an optimal set of nodes, called key players, whose activation (or removal) would maximally enhance (or degrade) a certain network functionality constitutes a fundamental class of problems in network science. Potential applications include network immunization, epidemic control, drug design and viral marketing. Because these problems are generally NP-hard, they typically cannot be solved by exact algorithms with polynomial time complexity. Many approximate and heuristic strategies have been proposed for specific application scenarios, yet we still lack a unified framework to efficiently solve this class of problems. Here, we introduce a deep reinforcement learning framework, FINDER, which can be trained purely on small synthetic networks generated by toy models and then applied to a wide spectrum of application scenarios. Extensive experiments under various problem settings demonstrate that FINDER significantly outperforms existing methods in terms of solution quality. Moreover, it is several orders of magnitude faster than existing methods on large networks. The presented framework opens up a new direction of using deep learning techniques to understand the organizing principles of complex networks, which enables us to design networks that are more robust against both attacks and failures.
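To make the optimization objective concrete, the sketch below illustrates the key-player (network dismantling) problem with a simple greedy baseline: repeatedly remove the node whose removal most shrinks the giant connected component. This is a minimal illustrative baseline written for this note, not the FINDER algorithm itself, which learns a removal policy over whole sequences with deep reinforcement learning; the function names and the toy graph are assumptions for illustration.

```python
# Hedged sketch of the key-player / network-dismantling objective.
# Greedy adaptive removal baseline -- NOT the FINDER algorithm.
from collections import defaultdict


def giant_component_size(adj, removed):
    """Size of the largest connected component, ignoring removed nodes."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:  # iterative DFS over the surviving subgraph
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best


def greedy_dismantle(edges, k):
    """Remove k nodes, each time picking the node whose removal most
    shrinks the giant component (a myopic version of the sequence-level
    objective that FINDER optimizes)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    removed = []
    for _ in range(k):
        candidates = [n for n in adj if n not in removed]
        best = min(candidates,
                   key=lambda n: giant_component_size(adj, removed + [n]))
        removed.append(best)
    return removed


# Toy example: two triangles joined through bridge node 3.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (5, 6), (4, 6)]
print(greedy_dismantle(edges, 1))  # -> [3]: removing node 3 splits the graph
```

Such greedy heuristics are myopic: they evaluate one removal at a time, whereas FINDER is trained to score entire removal sequences, which is why it finds better solutions on large networks.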
Data availability
All the data analysed in this paper, including synthetic graphs and real-world networks, can be accessed through our Code Ocean compute capsule (https://doi.org/10.24433/CO.3005605.v1).
Code availability
All source codes and models (including those that can reproduce all figures and tables analysed in this work) are publicly available through our Code Ocean compute capsule (https://doi.org/10.24433/CO.3005605.v1) or on GitHub (https://github.com/FFrankyy/FINDER).
Acknowledgements
We are grateful to M. Chen and Z. Liu for the feedback and assistance they provided during the development and preparation of this research. This work is partially supported by NSF III-1705169, NSF CAREER Award 1741634, NSF 1937599, an Okawa Foundation Grant and an Amazon Research Award. C.F. is supported by the CSC Scholarship offered by the China Scholarship Council. Y.-Y.L. is supported by grants from the John Templeton Foundation (award no. 51977) and National Institutes of Health (R01AI141529, R01HD093761, UH3OD023268, U19AI095219 and U01HL089856).
Author information
Contributions
Y.S. and Y.-Y.L. designed and managed the project. Y.S. and C.F. developed the FINDER algorithm. C.F. and L.Z. performed all the calculations. All authors analysed the results. C.F., Y.-Y.L. and Y.S. wrote the manuscript. All authors edited the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Supplementary Information: Finding key players in complex networks through deep reinforcement learning.
Cite this article
Fan, C., Zeng, L., Sun, Y. et al. Finding key players in complex networks through deep reinforcement learning. Nat Mach Intell 2, 317–324 (2020). https://doi.org/10.1038/s42256-020-0177-2
This article is cited by
- Identifying key players in complex networks via network entanglement. Communications Physics (2024)
- Robustness and resilience of complex networks. Nature Reviews Physics (2024)
- Deep reinforcement learning based on balanced stratified prioritized experience replay for customer credit scoring in peer-to-peer lending. Artificial Intelligence Review (2024)
- Searching for spin glass ground states through deep reinforcement learning. Nature Communications (2023)
- Spatial planning of urban communities via deep reinforcement learning. Nature Computational Science (2023)