So Yeon (Tiffany) Min: Deep Profile Map

## 🎯 Current Position & Identity

**Member of Technical Staff, Anthropic** (July 2025 - Present)

*Recently completed Ph.D. in Machine Learning, Carnegie Mellon University (2025)*

---

## 🎓 Educational Foundation

### Carnegie Mellon University (2020-2025)
- **Ph.D. in Machine Learning** - Machine Learning Department (MLD)
- **Co-Advisors**: Prof. Ruslan Salakhutdinov & Prof. Yonatan Bisk
- **Thesis Focus**: Embodied LLM Agents and Multimodal Spatial Representations

### Massachusetts Institute of Technology (2014-2020)
- **M.Eng in EECS** (2018-2020) - *Advisor: Prof. Peter Szolovits*
- **B.S. in EECS** (2014-2018)
- **🏆 Charles & Jennifer Johnson Artificial Intelligence and Decision-Making MEng Thesis Award (2nd Place, 2021)** - Thesis: "Towards Knowledge-Based, Robust Question Answering"

---

## 🏆 Recognition & Awards
- **2023 Apple Scholar in AI/ML** - one of 22 scholars selected globally for innovative research and thought leadership
- **Charles & Jennifer Johnson Thesis Award at MIT** (2021)
- **Featured multiple times in CMU School of Computer Science news**
- **494+ Google Scholar citations** as of 2025

---

## 🔬 Research Trajectory & Key Contributions

### Core Research Areas
- **Embodied AI** - Physical agents that understand and act in real environments
- **LLM Agents** - Language model systems that perform complex reasoning and actions
- **Non-parametric Memory Systems** - Dynamic memory architectures for persistent agent cognition
- **Multimodal Spatial Representations** - Bridging vision, language, and physical understanding

### Flagship Research Projects

#### **Embodied-RAG** (2024)
*General non-parametric embodied memory for retrieval and generation*
- Agent memory system that enables persistent world understanding
- Allows agents to store and retrieve embodied experiences dynamically

#### **Tools Fail** (EMNLP 2024)
*Detecting silent errors in faulty tools*
- Work on fault tolerance in LLM agent ecosystems
- Addresses reliability concerns for real-world deployment

#### **SPRING** (NeurIPS 2023)
*GPT-4 out-performs RL algorithms by studying papers and reasoning*
- Demonstrated that LLMs can outperform traditional RL baselines by reading research papers
- Pioneered paper-based learning for agent improvement

#### **FILM** (ICLR 2022)
*Following Instructions in Language with Modular Methods*
- Foundational work on instruction following in embodied environments
- Established modular approaches to language-grounded robotics

#### **Habitat 3.0** (ICLR 2024)
*A co-habitat for humans, avatars and robots*
- Major infrastructure contribution to embodied AI research
- Enables simulated testing of human-robot interactions

---

## 🏢 Professional Experience

### Industry Positions
- **Anthropic** - Member of Technical Staff (2025-Present)
- **Apple AI/ML** - Research Intern (2022)
- **Meta FAIR** - Research Intern (previous)

### Speaking & Academic Engagement
- **Invited Talk** - ECCV 2024 Multimodal Agents Workshop: "Progress and Challenges in Non-parametric and Parametric Components of Embodied Agents" (Sep 2024)
- **Invited Talk** - Yonsei University Vision and Learning Lab (Apr 2023)
- **Invited Talk** - GIST Computer Vision Group (Jan 2022)

---

## 📊 Research Impact & Collaboration Network

### Publication Metrics
- **20+ peer-reviewed publications** across top-tier venues
- **Key venues**: ICLR, NeurIPS, ECCV, CVPR, EMNLP, IROS, RSS
- **Extensive collaboration network** spanning CMU, MIT, Apple, Meta, and Stanford

### Key Collaborators
- **Ruslan Salakhutdinov** (CMU) - Primary PhD advisor
- **Yonatan Bisk** (CMU) - Co-advisor specializing in embodied AI
- **Peter Szolovits** (MIT) - MEng advisor, clinical NLP expert
- **Roozbeh Mottaghi** (Meta FAIR) - Embodied AI research
- **Jian Zhang** (Apple) - Computer vision and robotics

### Research Evolution Arc
1. **2018-2020**: Structured NLP & clinical QA (MIT era)
2. **2020-2022**: Embodied instruction following (early CMU)
3. **2022-2024**: Memory systems & fault tolerance (advanced CMU)
4. **2024-2025**: Scalable embodied cognition (pre-Anthropic)
5. **2025+**: Safe, interpretable AI systems (Anthropic era)

---

## 🎯 Technical Expertise

### AI/ML Specializations
- **Embodied Cognition** - Agents that reason about physical environments
- **Multimodal Learning** - Vision-language-action integration
- **Memory Architectures** - Persistent, retrieval-based agent cognition
- **Fault Detection** - Robust systems for real-world deployment
- **Graph-Structured Reasoning** - Interpretable agent decision-making

### Methodological Strengths
- **Modular System Design** - Breaking complex problems into interpretable components
- **Cross-Modal Grounding** - Connecting language to physical understanding
- **Evaluation Frameworks** - Designing metrics for embodied agent assessment
- **Infrastructure Development** - Building scalable research platforms

---

## 🔮 Strategic Positioning & Future Trajectory

### Current Industry Position
At **Anthropic**, Tiffany Min is positioned at the intersection of:
- **Embodied AI research** advancing toward real-world applications
- **Safety and alignment** research for advanced AI systems
- **Agent reliability** and interpretability for trustworthy AI

### Likely Research Directions
1. **Safe embodied agents** that can operate reliably in human environments
2. **Interpretable memory systems** for transparent agent decision-making
3. **Human-AI collaboration** in physical and digital spaces
4. **Scalable evaluation** frameworks for embodied AI safety

---

## 💭 Personal & Professional Characteristics

### Professional Approach
- **Research Philosophy**: Bridges theoretical rigor with practical application
- **Collaboration Style**: Extensive cross-institutional partnerships
- **Innovation Focus**: Memory, reliability, and interpretability in agent systems

### Digital Presence
- **Personal Website**: [soyeonm.github.io](https://soyeonm.github.io/)
- **Professional Profiles**: LinkedIn, Google Scholar, OpenReview
- **Academic Engagement**: Regular conference presentations and invited talks

*Note: Tiffany Min maintains a professional public presence focused on research contributions rather than personal details.*

---

## 🎯 Research Vision & Impact
So Yeon (Tiffany) Min belongs to a generation of AI researchers **bridging the gap between language understanding and embodied action**. Her work helps move the field from "clever chatbots" toward **persistent, situated cognitive systems** that can:

1. **Remember and learn** from embodied experiences
2. **Integrate symbolic and perceptual reasoning**
3. **Operate safely** in shared human environments
4. **Self-monitor and recover** from failures

At Anthropic, she is well positioned to contribute to **trustworthy, interpretable embodied AI** that could underpin future domestic robots, field agents, and human-AI collaborative systems.
---

## 📚 Complete References & Resources

### 🏠 Primary Sources
- **Personal Website**: [soyeonm.github.io](https://soyeonm.github.io/)
- **Google Scholar Profile**: [scholar.google.com/citations?user=dkRTvvcAAAAJ](https://scholar.google.com/citations?user=dkRTvvcAAAAJ&hl=en)
- **OpenReview Profile (CMU Era)**: [openreview.net/profile?id=~So_Yeon_Min2](https://openreview.net/profile?id=~So_Yeon_Min2)
- **OpenReview Profile (MIT Era)**: [openreview.net/profile?id=~So_Yeon_Min1](https://openreview.net/profile?id=~So_Yeon_Min1)
- **LinkedIn**: [linkedin.com/in/so-yeon-tiffany-min](https://www.linkedin.com/in/so-yeon-tiffany-min-70332aa5/)
- **Twitter/X**: [@SoYeonTiffMin](https://twitter.com/soyeontiffmin)

### 🎓 Educational & Award References
- **MIT EECS 2021 Awards**: [eecs.mit.edu/2021-eecs-awards](https://www.eecs.mit.edu/2021-eecs-awards/)
- **CMU Apple Scholar Announcement**: [cs.cmu.edu/news/2023/2023-apple-scholar](https://www.cs.cmu.edu/news/2023/2023-apple-scholar)
- **MIT CSAIL Profile**: [csail.mit.edu/person/so-yeon-min](https://www.csail.mit.edu/person/so-yeon-min)

### 📊 2025 Publications

#### **Self-Regulation and Requesting Interventions**
- **Status**: In submission (2025)
- **Project Page**: [soyeonm.github.io/self_reg/](https://soyeonm.github.io/self_reg/)
- **Paper PDF**: [Google Drive link](https://drive.google.com/file/d/17Suynjm_2Uf_dVBpFNS1H_ZErCuIW3UM/view?usp=sharing)
- **arXiv**: [arxiv.org/abs/2502.04576](https://arxiv.org/abs/2502.04576)
- **Authors**: So Yeon Min, Yue Wu, Jimin Sun, Max Kaufmann, Fahim Tajwar, Yonatan Bisk, Ruslan Salakhutdinov

### 📊 2024 Publications

#### **Embodied-RAG: General Non-parametric Embodied Memory for Retrieval and Generation**
- **Status**: In submission (2024)
- **arXiv**: [arxiv.org/abs/2409.18313](https://arxiv.org/abs/2409.18313)
- **Authors**: So Yeon Min*, Quanting Xie*, Tianyi Zhang, Aarav Bajaj, Ruslan Salakhutdinov, Matthew Johnson-Roberson, Yonatan Bisk

#### **Tools Fail: Detecting Silent Errors in Faulty Tools**
- **Venue**: EMNLP 2024
- **arXiv**: [arxiv.org/abs/2406.19228](https://arxiv.org/abs/2406.19228)
- **Authors**: Jimin Sun, So Yeon Min, Yingshan Chang, Yonatan Bisk

#### **Situated Instruction Following**
- **Venue**: ECCV 2024
- **Project Page**: [soyeonm.github.io/SIF_webpage/](https://soyeonm.github.io/SIF_webpage/)
- **Authors**: So Yeon Min, Xavi Puig, Devendra Singh Chaplot, Tsung-Yen Yang, Akshara Rai, Priyam Parashar, Ruslan Salakhutdinov, Yonatan Bisk, Roozbeh Mottaghi

#### **AgentKit: Flow Engineering with Graphs, not Coding**
- **Venue**: COLM 2024
- **arXiv**: [arxiv.org/abs/2404.11483](https://arxiv.org/abs/2404.11483)
- **Authors**: Yue Wu, Yewen Fan, So Yeon Min, Shrimai Prabhumoye, Stephen McAleer, Yonatan Bisk, Ruslan Salakhutdinov, Yuanzhi Li, Tom Mitchell

#### **GOAT: Go to Any Thing**
- **Venue**: RSS 2024
- **arXiv**: [arxiv.org/abs/2311.06430](https://arxiv.org/pdf/2311.06430.pdf)
- **Authors**: Matthew Chang, Theophile Gervet, Mukul Khanna, Sriram Yenamandra, Dhruv Shah, So Yeon Min, Kavit Shah, Chris Paxton, Saurabh Gupta, Dhruv Batra, Roozbeh Mottaghi, Jitendra Malik, Devendra Singh Chaplot

#### **Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots**
- **Venue**: ICLR 2024
- **arXiv**: [arxiv.org/abs/2310.13724](https://arxiv.org/pdf/2310.13724.pdf)
- **Authors**: Xavier Puig, Eric Undersander, Andrew Szot, Mikael Dallaire Cote, Tsung-Yen Yang, Ruslan Partsey, Ruta Desai, Alexander William Clegg, Michal Hlavac, So Yeon Min, Vladimír Vondruš, Theophile Gervet, Vincent-Pierre Berges, John M Turner, Oleksandr Maksymets, Zsolt Kira, Mrinal Kalakrishnan, Jitendra Malik, Devendra Singh Chaplot, Unnat Jain, Dhruv Batra, Akshara Rai, Roozbeh Mottaghi

### 📊 2023 Publications

#### **SPRING: GPT-4 Out-performs RL Algorithms by Studying Papers and Reasoning**
- **Venue**: NeurIPS 2023
- **arXiv**: [arxiv.org/abs/2305.15486](https://arxiv.org/pdf/2305.15486.pdf)
- **Authors**: Yue Wu, So Yeon Min, Shrimai Prabhumoye, Yonatan Bisk, Ruslan Salakhutdinov, Amos Azaria, Tom Mitchell, Yuanzhi Li

#### **Plan, Eliminate, and Track – Language Models are Good Teachers for Embodied Agents**
- **Status**: arXiv preprint (2023)
- **arXiv**: [arxiv.org/abs/2305.02412](https://arxiv.org/pdf/2305.02412.pdf)
- **Authors**: Yue Wu, So Yeon Min, Yonatan Bisk, Ruslan Salakhutdinov, Amos Azaria, Yuanzhi Li, Tom Mitchell, Shrimai Prabhumoye

#### **Object Goal Navigation with End-to-End Self-Supervision**
- **Venue**: IROS 2023
- **arXiv**: [arxiv.org/abs/2212.05923](https://arxiv.org/abs/2212.05923)
- **Authors**: So Yeon Min, Yao-Hung Hubert Tsai, Wei Ding, Ali Farhadi, Ruslan Salakhutdinov, Yonatan Bisk, Jian Zhang

#### **EXCALIBUR: Encouraging and Evaluating Embodied Exploration**
- **Venue**: CVPR 2023
- **Paper**: [openaccess.thecvf.com](https://openaccess.thecvf.com/content/CVPR2023/papers/Zhu_EXCALIBUR_Encouraging_and_Evaluating_Embodied_Exploration_CVPR_2023_paper.pdf)
- **Authors**: Hao Zhu, Raghav Kapoor, So Yeon Min, Winson Han, Jiatai Li, Kaiwen Geng, Graham Neubig, Yonatan Bisk, Aniruddha Kembhavi, Luca Weihs

### 📊 2022 Publications

#### **Don't Copy the Teacher: Data and Model Challenges in Embodied Dialogue**
- **Venue**: EMNLP 2022
- **ACL Anthology**: [aclanthology.org/2022.emnlp-main.635/](https://aclanthology.org/2022.emnlp-main.635/)
- **arXiv**: [arxiv.org/abs/2210.04443](https://arxiv.org/abs/2210.04443)
- **Authors**: So Yeon Min, Hao Zhu, Yonatan Bisk, Ruslan Salakhutdinov

#### **FILM: Following Instructions in Language with Modular Methods**
- **Venue**: ICLR 2022
- **Project Page**: [soyeonm.github.io/FILM_webpage/](https://soyeonm.github.io/FILM_webpage/)
- **Authors**: So Yeon Min, Devendra Chaplot, Pradeep Ravikumar, Yonatan Bisk, Ruslan Salakhutdinov

#### **Tackling AlfWorld with Action Attention and Common Sense from Language Models**
- **Venue**: Second Workshop on Language and Reinforcement Learning (2022)
- **Authors**: Yue Wu, So Yeon Min, Yonatan Bisk, Ruslan Salakhutdinov, Shrimai Prabhumoye

### 📊 2020 Publications

#### **Entity-Enriched Neural Models for Clinical Question Answering**
- **Venue**: Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing
- **ACL Anthology**: [aclweb.org/anthology/2020.bionlp-1.12/](https://www.aclweb.org/anthology/2020.bionlp-1.12/)
- **arXiv**: [arxiv.org/abs/2005.06587](https://arxiv.org/abs/2005.06587)
- **Authors**: Bhanu Pratap Singh Rawat, Wei-Hung Weng, So Yeon Min, Preethi Raghavan, Peter Szolovits

#### **Advancing Seq2seq with Joint Paraphrase Learning**
- **Venue**: Proceedings of the 3rd Clinical Natural Language Processing Workshop
- **ACL Anthology**: [aclweb.org/anthology/2020.clinicalnlp-1.30/](https://www.aclweb.org/anthology/2020.clinicalnlp-1.30/)
- **Authors**: So Yeon Min, Preethi Raghavan, Peter Szolovits

#### **TransINT: Embedding Implication Rules in Knowledge Graphs with Isomorphic Intersections of Linear Subspaces**
- **Venue**: Automated Knowledge Base Construction 2020
- **AKBC**: [akbc.ws/2020/virtual/poster_87.html](https://www.akbc.ws/2020/virtual/poster_87.html)
- **Authors**: So Yeon Min, Preethi Raghavan, Peter Szolovits

#### **Towards Knowledge-Based, Robust Question Answering**
- **Type**: Master's Thesis, MIT
- **MIT DSpace**: [dspace.mit.edu/handle/1721.1/127462](https://dspace.mit.edu/bitstream/handle/1721.1/127462/1192966860-MIT.pdf?sequence=1&isAllowed=y)
- **Author**: So Yeon Min

### 🎤 Talks & Presentations

#### **ECCV 2024 Multimodal Agents Workshop**
- **Date**: September 30, 2024
- **Topic**: "Progress and Challenges in Non-parametric and Parametric Components of Embodied Agents"
- **Slides**: [Google Drive link](https://drive.google.com/file/d/1zslH_NgcVIjd8CaNRfYMGzxYjj86TdCg/view?usp=sharing)
- **Workshop Page**: [multimodalagents.github.io](https://multimodalagents.github.io/)

#### **Yonsei University Vision and Learning Lab**
- **Date**: April 10, 2023
- **Type**: Invited talk

#### **GIST Computer Vision Group**
- **Date**: January 5, 2022
- **Type**: Invited talk

### 🏢 Institutional References

#### **Anthropic**
- **Company Page**: [anthropic.com](https://www.anthropic.com/)
- **Research Focus**: AI safety and alignment
- **AI Watch Profile**: [aiwatch.issarice.com/?organization=Anthropic](https://aiwatch.issarice.com/?organization=Anthropic)

#### **Carnegie Mellon University**
- **Ruslan Salakhutdinov's Publications**: [cs.cmu.edu/~rsalakhu/publications.html](https://www.cs.cmu.edu/~rsalakhu/publications.html)
- **Machine Learning Department**: [ml.cmu.edu](https://www.ml.cmu.edu/)

#### **MIT EECS**
- **Department Page**: [eecs.mit.edu](https://www.eecs.mit.edu/)
- **CSAIL**: [csail.mit.edu](https://www.csail.mit.edu/)

### 🔬 Research Platforms & Tools

#### **Habitat Platform**
- **Project**: Embodied AI simulation platform
- **Related**: Habitat 3.0 research

#### **Apple AI/ML Scholars Program**
- **Program**: Supports outstanding PhD students in AI/ML research
- **Recognition**: 2023 Scholar

### 📱 Social & Professional Networks
- **ResearchGate**: [researchgate.net/profile/So-Yeon-Min](https://www.researchgate.net/profile/So-Yeon-Min)
- **DBLP**: Available through OpenReview profiles
- **ACL Anthology**: Author page accessible through publications

*Note: All links verified as of search date. Some institutional and conference pages may require academic access for full content.*
