My research interests center on Generative AI and Trustworthiness, particularly Large (Vision) Language Models (LLMs & LVLMs) and Multi-Modal Foundation Models, with an emphasis on alignment, security, and reliability at scale. I’m also investigating parameter-efficient training and inference of Multi-Modal Models in multi-user scenarios for personalization, and exploring Retrieval-Augmented Generation (RAG) strategies to improve reasoning and reliability against adversaries. I am deeply passionate about integrating these intricate models into advanced systems with real-world applications.
Honestly, the Dark Side of AI 😈 has always been very attractive to me, which is why I love attacking these models from an adversarial perspective (Dopamine Rush 🌊🧨) to find vulnerabilities in current alignment and defense strategies, with the goal of developing more robust systems.
I never limit myself: I enjoy pursuing interdisciplinary research and am enthusiastic about exploring innovative concepts. Let’s collaborate 😄
September 2023: “Vulnerabilities of Large Language Models to Adversarial Attacks” was accepted to ACL 2024! Wow! 🤩
July 2023: My first paper, Plug and Pray: Exploiting off-the-shelf components of Multi-Modal Models, is out :D Check it out!
April 2023: I will be serving as the moderator & evaluator of student presentations at UGRS2023!
Ph.D. in Computer Science at University of California, Riverside (2022-)
B.Sc. in Electrical Engineering at Sharif University of Technology (2017-2022)
Ranked 68th among 150,000 participants in Iran’s Nationwide University Entrance Exam (Konkur)