Kevin Assogba

Department of Computer Science, RIT


kta7930 [at] rit.edu

20 Lomb Memorial Dr.

Rochester, NY 14623

Hello! I am a final-year Ph.D. student at the Rochester Institute of Technology (RIT) and a member of the High Performance Distributed Systems Lab. Advised by M. Mustafa Rafique and Bogdan Nicolae, I develop system software that optimizes the scheduling and I/O profiles of deep learning applications. I enjoy working on software projects, both simple and complex, that span diverse scientific and engineering fields, including computational molecular dynamics, cosmology, and transportation. I advocate for reproducible and open-source software. I have research experience in the following areas:

  • High-performance computing I/O
  • Data movement optimization
  • KV cache management for emerging LLMs
  • Performance modeling for large-scale distributed systems
  • Reproducibility of scientific results

News

Nov 2, 2023 Our research paper, Optimizing the Training of Co-Located Deep Learning Models Using Cache-Aware Staggering, was accepted at the IEEE HiPC conference in Goa, India, and nominated as a Best Paper Finalist.

Media: Featured in an ANL MCS news article

Latest Publications

  1. NAACL ’25
    Bayelemabaga: Creating Resources for Bambara NLP
    Allahsera Auguste Tapo, Kevin Assogba, Christopher M Homan, and 2 more authors
    In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), Apr 2025
  2. Middleware ’24
    Towards Affordable Reproducibility Using Scalable Capture and Comparison of Intermediate Multi-Run Results
    Nigel Tan, Kevin Assogba, Jay Asworth, and 5 more authors
    In 25th ACM/IFIP International Middleware Conference, Apr 2024
  3. HiPC ’23
    Optimizing the Training of Co-Located Deep Learning Models Using Cache-Aware Staggering
    Kevin Assogba, M. Mustafa Rafique, and Bogdan Nicolae
    In HiPC ’23: 30th IEEE International Conference on High Performance Computing, Data, and Analytics, Dec 2023
  4. Cluster ’23
    PredictDDL: Reusable Workload Performance Prediction for Distributed Deep Learning
    Kevin Assogba*, Eduardo Lima*, M. Mustafa Rafique, and 1 more author
    In 2023 IEEE International Conference on Cluster Computing (CLUSTER), Oct 2023
  5. ICPP ’22
    Exploiting CXL-Based Memory for Distributed Deep Learning
    Moiz Arif, Kevin Assogba, M. Mustafa Rafique, and 1 more author
    In Proceedings of the 51st International Conference on Parallel Processing, Oct 2022