About me
I'm a second-year PhD student in the Computer Science Department at UC Riverside, where I am very fortunate to be advised by Prof. Nael Abu-Ghazaleh and Prof. Yue Dong.
My research interests center on Generative AI and Trustworthiness, particularly Large (Vision) Language Models (LLMs & LVLMs) and multi-modal foundation models, with an emphasis on alignment, security, and reliability at scale. I'm also investigating parameter-efficient training and inference of multi-modal models in multi-user scenarios for personalization, and exploring Retrieval-Augmented Generation (RAG) strategies to improve reasoning and reliability against adversaries. I am deeply passionate about integrating these models into advanced systems with real-world applications.
Honestly, the Dark Side of AI has always been very attractive to me. That's why I love attacking these models from an adversarial perspective (dopamine rush!) to find the vulnerabilities of current alignment and defense strategies, with the goal of developing more robust systems.
I never limit myself: I enjoy pursuing interdisciplinary research and I'm enthusiastic about exploring innovative concepts. Let's collaborate!
News ⬇️ (scroll down)
- Summer 2024: I will be doing an internship at Microsoft Research this summer! (Thrilled!)
- Mar 2024: My work on Cross-Modal Vulnerability Alignment in Vision-Language Models was accepted for presentation at the SuperAGI Leap Summit 2024! [Video] [SuperAGI]
- Mar 2024: Our paper "That Doesn't Go There: Attacks on Shared State in Multi-User Augmented Reality Applications" has been accepted to USENIX SECURITY 2024! [paper]
- Feb 2024: Gave a lightning talk on my AI Safety work at Cohere For AI! [Slides]
- Jan 2024: Our paper "Jailbreak in Pieces: Compositional Adversarial Attacks on Multi-Modal Language Models" was accepted for a Spotlight presentation (top 5% of 7,262 submissions) at ICLR 2024! [OpenReview] [SlidesLive-Video] [YoutubeAInews]
- Nov 2023: Our paper "Jailbreak in Pieces: Compositional Adversarial Attacks on Multi-Modal Language Models" won the Best Paper Award at SoCal NLP 2023! [paper] [Award] [News1] [News2] [News3]
- Sep 2023: Our paper "Vulnerabilities of Large Language Models to Adversarial Attacks" has been accepted as a tutorial at ACL 2024! [paper]
- Jul 2023: Yay! My first paper is out: "Plug and Pray: Exploiting off-the-shelf components of Multi-Modal Models" [paper]
- Apr 2023: I will be serving as the moderator and evaluator of student presentations at UGRS 2023! [paper]
Education
Ph.D. in Computer Science, University of California, Riverside (Sep 2022 - Present)
B.Sc. in Electrical Engineering, Sharif University of Technology (2017-2022)
Ranked 68th among 150,000 participants in Iran's Nationwide University Entrance Exam (Konkur)