Hello! I am a third-year computer science PhD candidate at Harvard University,
advised by Professor Demba Ba. I am supported by the
Kempner Institute Graduate Fellowship.
My research interests are primarily in computer vision.
I've worked on interpretability and compression for large visual foundation models, and I have extensive experience training large transformer-based vision models.
I've also done work at the intersection of neuroscience and AI, building computational models of traveling waves to study how neurons transfer information.
Previously, I worked at the AI Institute in Dynamic Systems with
Nathan Kutz and
Ryan Raut.
I earned my B.S. in computer science from the Allen School at the
University of Washington, where I worked with Rajesh Rao
and William Noble.
Jacobs M.*, Fel T.*, Hakim R.*, Brondetta A., Ba D., Keller T.A. (2025).
We introduce the Block-Recurrent Hypothesis (BRH), arguing that trained ViTs admit a block-recurrent depth structure. To validate this, we train recurrent surrogates called Raptor. We demonstrate that a Raptor model can recover 96% of DINOv2 ImageNet-1k linear probe accuracy with only 2 blocks while maintaining equivalent computational cost.
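To make the idea of a block-recurrent depth structure concrete, here is a minimal sketch: a small number of shared transformer blocks, each applied repeatedly, so that the unrolled depth matches a deeper feed-forward ViT. The block definition, block count, and repeat schedule are illustrative assumptions, not the exact Raptor configuration from the paper.

```python
# Illustrative sketch of a block-recurrent encoder (not the paper's exact Raptor model).
import torch
import torch.nn as nn


class Block(nn.Module):
    """Standard pre-norm transformer block (self-attention + MLP)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = x + self.mlp(self.norm2(x))
        return x


class BlockRecurrentEncoder(nn.Module):
    """Applies `num_blocks` shared blocks, each unrolled `repeats` times,
    instead of `num_blocks * repeats` distinct layers."""

    def __init__(self, dim: int, num_blocks: int = 2, repeats: int = 6):
        super().__init__()
        self.blocks = nn.ModuleList(Block(dim) for _ in range(num_blocks))
        self.repeats = repeats

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:          # e.g. 2 shared blocks ...
            for _ in range(self.repeats):  # ... each applied 6 times = depth 12
                tokens = block(tokens)
        return tokens


if __name__ == "__main__":
    x = torch.randn(1, 197, 768)  # (batch, tokens, dim), ViT-B-like token grid
    print(BlockRecurrentEncoder(768)(x).shape)  # torch.Size([1, 197, 768])
```

Because the blocks are shared across the unrolled depth, parameters scale with the number of distinct blocks while compute per forward pass stays comparable to the deeper feed-forward model.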
LinkedIn: mozesjacobs
Email: mozesjacobs [at] g.harvard.edu