Random sketch learning for deep neural networks in edge computing

Abstract

Despite the great potential of deep neural networks (DNNs), they require massive weights and huge computational resources, creating a vast gap when deploying artificial intelligence at low-cost edge devices. Current lightweight DNNs, obtained by pre-training in a high-dimensional space followed by post-compression, struggle to cover this resource deficit, making tiny artificial intelligence hard to implement. Here we report an architecture named random sketch learning, or Rosler, for computationally efficient tiny artificial intelligence. We build a universal compressing-while-training framework that directly learns a compact model and, most importantly, enables computationally efficient on-device learning. As validated on different models and datasets, it attains a substantial memory reduction of ~50–90× (16-bit quantization) compared with fully connected DNNs. We demonstrate it on low-cost hardware, whereby the computation is accelerated by >180× and the energy consumption is reduced by ~10×. Our method paves the way for deploying tiny artificial intelligence in many scientific and industrial applications.
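
The compressing-while-training idea is that each layer's weight matrix is represented by thin, randomly sketched factors, so a compact model is learned directly rather than pruned after full-size training. As a rough, illustrative sketch of the underlying idea only (not the authors' algorithm), the following minimal NumPy example uses a CUR-style, norm-based column-sampling rule with hypothetical layer sizes to compress one dense layer into two thin factors:

    # Illustrative only: CUR-style random sketching of one dense layer
    # (hypothetical sizes and sampling rule; not the Rosler algorithm itself).
    import numpy as np

    rng = np.random.default_rng(0)

    def sketch_layer(W, k):
        """Approximate an n_out x n_in weight matrix W by two thin factors L, R."""
        n_out, n_in = W.shape
        p = np.sum(W ** 2, axis=0)
        p = p / p.sum()                               # sample columns by squared norm
        idx = rng.choice(n_in, size=k, replace=False, p=p)
        L = W[:, idx]                                 # n_out x k sampled columns
        R = np.linalg.lstsq(L, W, rcond=None)[0]      # k x n_in least-squares fit
        return L, R

    W = rng.standard_normal((512, 512))               # stand-in for a trained layer
    L, R = sketch_layer(W, k=32)
    x = rng.standard_normal(512)
    y_full, y_compact = W @ x, L @ (R @ x)            # 512*512 vs 2*512*32 stored weights
    print(np.linalg.norm(y_full - y_compact) / np.linalg.norm(y_full))

For trained layers, which are typically close to low rank, the approximation error is much smaller than for the random matrix used here; the compact layer stores 2kn instead of n^2 weights and replaces one large matrix-vector product with two thin ones.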

Fig. 1: Rosler directly learns one compact tiny model.
Fig. 2: Computationally efficient model training.
Fig. 3: Test accuracy and computation/storage cost of Rosler.
Fig. 4: On-device federated learning in industrial IoT.
Fig. 5: Hardware demonstration of computationally efficient edge inference/training.

Data availability

The bearing data (https://csegroups.case.edu/bearingdatacenter), the MNIST data (http://yann.lecun.com/exdb/mnist/), the CIFAR-10 data (https://www.cs.toronto.edu/kriz/cifar.html) and the Cat–dog data (https://www.kaggle.com/c/dogsvs-cats/data) can all be downloaded from the corresponding websites. Source Data for Figs. 2–5 are also available with this manuscript.
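
For quick experimentation, the MNIST and CIFAR-10 sets can also be pulled through standard library loaders; the snippet below is a convenience example assuming a TensorFlow/Keras installation (the bearing and Cat–dog data must be downloaded manually from the URLs above).

    # Convenience loaders for two of the benchmark datasets, assuming a
    # TensorFlow/Keras installation; the bearing and Cat-dog data must be
    # fetched manually from the URLs listed above.
    from tensorflow.keras.datasets import cifar10, mnist

    (x_train, y_train), (x_test, y_test) = mnist.load_data()     # 60,000/10,000 28x28 grayscale digits
    (c_train, c_ytrain), (c_test, c_ytest) = cifar10.load_data() # 50,000/10,000 32x32 colour images
    print(x_train.shape, c_train.shape)                          # (60000, 28, 28) (50000, 32, 32, 3)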

Code availability

A Python implementation of Rosler is available in Code Ocean (ref. 52).

References

  1. Lecun, Y., Bengio, Y. & Hinton, G. E. Deep learning. Nature 521, 436–444 (2015).

  2. Silver, D. et al. Mastering the game of Go with deep neural networks and tree search. Nature 529, 484–489 (2016).

  3. Reichstein, M. et al. Deep learning and process understanding for data-driven Earth system science. Nature 566, 195–204 (2019).

  4. Park, J., Samarakoon, S., Bennis, M. & Debbah, M. Wireless network intelligence at the edge. Proc. IEEE 107, 2204–2239 (2019).

  5. Doyu, H. & Morabito, R. TinyML as-a-Service: What is it and what does it mean for the IoT Edge? Ericsson https://www.ericsson.com/en/blog/2019/12/tinyml-as-a-service-iot-edge (2019).

  6. Vaughan, O. Working on the edge. Nat. Electron. 2, 2–3 (2019).

  7. Burger, B. et al. A mobile robotic chemist. Nature 583, 237–241 (2020).

  8. Wang, J., Ma, Y., Zhang, L., Gao, R. X. & Wu, D. Deep learning for smart manufacturing: methods and applications. J. Manuf. Syst. 48, 144–156 (2018).

  9. Simons, F. J. et al. On the potential of recording earthquakes for global seismic tomography by low-cost autonomous instruments in the oceans. J. Geophys. Res. Solid Earth 114, B05307 (2009).

  10. Kiran, B. R. et al. Deep reinforcement learning for autonomous driving: a survey. IEEE Trans. Intell. Transport. Syst. https://doi.org/10.1109/TITS.2021.3054625 (2021).

  11. Weiss, B. A., Pellegrino, J., Justiniano, M. & Raghunatha, A. Measurement Science Roadmap for Prognostics and Health Management for Smart Manufacturing Systems (National Institute of Standards and Technology, 2016); https://doi.org/10.6028/NIST.AMS.100-2

  12. Smith, W. A. & Randall, R. B. Rolling element bearing diagnostics using the Case Western Reserve University data: a benchmark study. Mech. Syst. Signal Process. 64, 100–131 (2015).

  13. Doyu, H., Morabito, R. & Höller, J. Bringing machine learning to the deepest IoT edge with TinyML as-a-service. IEEE IoT Newsletter (March 2020).

  14. Doyu, H. & Morabito, R. TinyML as a service and the challenges of machine learning at the edge. Ericsson https://www.ericsson.com/en/blog/2019/12/tinyml-as-a-service (2019).

  15. Ward-Foxton, S. Adapting the microcontroller for AI in the endpoint. EE Times https://www.eetimes.com/adapting-the-microcontroller-for-ai-in-the-endpoint/ (2020).

  16. Loukides, M. TinyML: the challenges and opportunities of low-power ML applications. O’Reilly https://www.oreilly.com/radar/tinyml-the-challenges-and-opportunities-of-low-power-ml-applications/ (2019).

  17. Reddi, V. J. Enabling ultra-low power machine learning at the edge. In TinyML Summit 2020 (TinyML, 2020); https://cms.tinyml.org/wp-content/uploads/summit2020/tinyMLSummit2020-4-4-JanapaReddi.pdf

  18. Koehler, G. MNIST handwritten digit recognition in Keras. Nextjournal https://nextjournal.com/gkoehler/digit-recognition-with-keras (2020).

  19. Xu, X. et al. Scaling for edge inference of deep neural networks. Nat. Electron. 1, 216–222 (2018).

  20. Sze, V., Chen, Y. H., Yang, T. J. & Emer, J. S. Efficient processing of deep neural networks: a tutorial and survey. Proc. IEEE 105, 2295–2329 (2017).

  21. Gao, M., Pu, J., Yang, X., Horowitz, M. & Kozyrakis, C. Tetris: scalable and efficient neural network acceleration with 3D memory. In Proc. 22nd International Conference on Architectural Support for Programming Languages and Operating Systems Vol. 45, 751–764 (ACM, 2017).

  22. Li, C., Miao, H., Li, Y., Hao, J. & Xia, Q. Analogue signal and image processing with large memristor crossbars. Nat. Electron. 1, 52–59 (2018).

  23. Prezioso, M. et al. Training and operation of an integrated neuromorphic network based on metal-oxide memristors. Nature 521, 61–64 (2015).

  24. NVIDIA Tesla P100. NVIDIA www.nvidia.com/object/tesla-p100.html (2017).

  25. Han, S., Pool, J., Tran, J. & Dally, W. J. Learning both weights and connections for efficient neural networks. In Proc. Neural Information Processing Systems 1135–1143 (NIPS, 2015).

  26. Wen, W., Wu, C., Wang, Y., Chen, Y. & Li, H. Learning structured sparsity in deep neural networks. In Proc. Neural Information Processing Systems 2074–2082 (NIPS, 2016).

  27. Han, S., Mao, H. & Dally, W. J. Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding. In Proc. International Conference on Learning Representations 1–14 (ICLR, 2015).

  28. Frankle, J. & Carbin, M. The lottery ticket hypothesis: finding sparse, trainable neural networks. In Proc. International Conference on Learning Representations 1–42 (ICLR, 2018).

  29. Lee, N., Thalaiyasingam, A. & Torr, P. H. SNIP: single-shot network pruning based on connection sensitivity. In Proc. International Conference on Learning Representations 1–15 (ICLR, 2019).

  30. Denil, M., Shakibi, B., Dinh, L., Ranzato, M. & De Freitas, N. Predicting parameters in deep learning. In Proc. Neural Information Processing Systems 2148–2156 (NIPS, 2013).

  31. Jaderberg, M., Vedaldi, A. & Zisserman, A. Speeding up convolutional neural networks with low rank expansions. In Proc. British Machine Vision Conference 1–13 (BMVC, 2014).

  32. Zhou, T. & Tao, D. GoDec: randomized low-rank & sparse matrix decomposition in noisy case. In Proc. International Conference on Machine Learning 33–40 (ICML, 2011).

  33. Yu, X., Liu, T., Wang, X. & Tao, D. On compressing deep models by low rank and sparse decomposition. In Proc. International Conference on Computer Vision and Pattern Recognition 67–76 (CVPR, 2017).

  34. Lee, E. H., Miyashita, D., Chai, E., Murmann, B. & Wong, S. S. LogNet: energy-efficient neural networks using logarithmic computation. In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing 5900–5904 (IEEE, 2017).

  35. Dong, X. & Yang, Y. Network pruning via transformable architecture search. In Proc. Neural Information Processing Systems 760–771 (NIPS, 2019).

  36. Guo, Y. et al. NAT: neural architecture transformer for accurate and compact architectures. In Proc. Neural Information Processing Systems 737–748 (NIPS, 2019).

  37. Blalock, D. W., Ortiz, J. J. G., Frankle, J. & Guttag, J. V. What is the state of neural network pruning? In Proc. Machine Learning and Systems 1–18 (MLSys, 2020).

  38. Yang, Q. et al. Federated machine learning: concept and applications. ACM Trans. Intell. Syst. Technol. 10, 1–19 (2019).

  39. Bonawitz, K. et al. Practical secure aggregation for federated learning on user-held data. In Proc. Neural Information Processing Systems (NIPS, 2016).

  40. Silva, S., Gutman, B. A., Romero, E., Thompson, P. M. & Lorenzi, M. Federated learning in distributed medical databases: meta-analysis of large-scale subcortical brain data. In Proc. IEEE International Symposium on Biomedical Imaging 270–274 (IEEE, 2019).

  41. Mcmahan, H. B., Moore, E., Ramage, D., Hampson, S. & Agüera y Arcas, B. Communication-efficient learning of deep networks from decentralized data. In Proc. 20th International Conference on Artificial Intelligence and Statistics 1–11 (AISTATS, 2017).

  42. Simonyan, K. & Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proc. International Conference on Learning Representations 1–14 (ICLR, 2015).

  43. Lym, S. et al. PruneTrain: fast neural network training by dynamic sparse model reconfiguration. In Proc. International Conference for High Performance Computing, Networking, Storage and Analysis 1–13 (ACM, 2019).

  44. Lu, Y., Huang, X., Zhang, K., Maharjan, S. & Zhang, Y. Low-latency federated learning and blockchain for edge association in digital twin empowered 6G networks. IEEE Trans. Industr. Inform. https://doi.org/10.1109/TII.2020.3017668 (2020).

  45. Brisimi, T. S. et al. Federated learning of predictive models from federated electronic health records. Int. J. Med. Inform. 112, 59–67 (2018).

  46. Glorot, X. & Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proc. Thirteenth International Conference on Artificial Intelligence and Statistics 249–256 (JMLR, 2010).

  47. Wang, S. & Zhang, Z. Improving CUR matrix decomposition and the Nyström approximation via adaptive sampling. J. Mach. Learn. Res. 14, 2729–2769 (2013).

  48. Drineas, P., Mahoney, M. W. & Muthukrishnan, S. Relative-error CUR matrix decompositions. SIAM J. Matrix Anal. Appl. 30, 844–881 (2008).

  49. Li, B. et al. Randomized approximate channel estimator in massive-MIMO communication. IEEE Commun. Lett. 24, 2314–2318 (2020).

  50. Li, B. et al. Fast-MUSIC for automotive massive-MIMO radar. Preprint at https://arxiv.org/abs/1911.07434 (2019).

  51. Kingma, D. P. & Ba, J. Adam: a method for stochastic optimization. In Proc. International Conference on Learning Representations 1–15 (ICLR, 2015).

  52. Li, B., Liu, H. & Chen, P. Random sketch learning for tiny AI. Code Ocean https://doi.org/10.24433/CO.5227764.v1 (2021).

Acknowledgements

This work was supported by the Major Scientific Instrument Development Plan of National Natural Science Foundation of China (NSFC) under grant no. 61827901, NSFC under grant no. U1805262, Major Research Plan of NSFC under grant no. 91738301 and Project of Basic Science Center of NSFC under grant no. 62088101.

Author information

Contributions

B.L. conceived the idea. B.L., P.C. and H.L. designed and implemented the source code. B.L., P.C., H.L., W.G. and X.C. analyzed the data. All the authors together interpreted the findings and wrote the paper. P.C. and H.L. contributed equally.

Corresponding authors

Correspondence to Bin Li or Xianbin Cao.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information Nature Computational Science thanks Jingtong Hu, Xiaowei Xu and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Fernando Chirigati was the primary editor on this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Supplementary text, Figs. 1–6 and Tables 1 and 2.

Source data

Source Data Fig. 2

Raw data of 50 trials.

Source Data Fig. 3

Test accuracy and gain of memory/computation reduction.

Source Data Fig. 4

Test accuracy and gain of memory/computation reduction.

Source Data Fig. 5

Raw data of computation time and power.

About this article

Cite this article

Li, B., Chen, P., Liu, H. et al. Random sketch learning for deep neural networks in edge computing. Nat Comput Sci 1, 221–228 (2021). https://doi.org/10.1038/s43588-021-00039-6
