AI Security Researcher with experience in LLM alignment and adversarial safety evaluation. Specialized in developing novel vulnerability assessment frameworks for multi-agent AI systems through AutoGen-based experimental implementations. Experienced in leading a research team and delivering actionable insights on LLM robustness under real-world conditions. Currently finalizing research for an AAAI 2026 submission (August 2025).
Overview
5 years of professional experience
Work History
Cyber Researcher - LLM Alignment Researcher
Cyber Ben Gurion (CBG), Beersheba
10.2024 - Current
Conducted systematic literature reviews on adversarial attacks and safety vulnerabilities in LLM systems.
Synthesized research findings to identify critical gaps in current LLM safety evaluation methodologies.
Supervised a team of three BSc students, facilitating collaboration and technical skill development.
Designed a novel evaluation framework for testing LLM alignment robustness under emergent agentic behavior.
Developing novel jailbreaking methodologies that combine emergent agentic behavior with semantic actor networks, targeting an AAAI 2026 submission.
Implemented multi-agent frameworks using AutoGen for systematic evaluation of LLM safety boundaries and alignment robustness under varied real-world conditions.
Gained extensive hands-on experience with LLM fine-tuning, alignment techniques, and Python-based ML frameworks.
Research Assistant - Geolocation
Cyber Ben Gurion (CBG), Beersheba
10.2020 - 10.2024
Designed experiments to test hypotheses, improving accuracy and reliability of research results.
Planned network simulations using the NetworkX Python library to improve experiment design.
Developed experimental setups with ns-3 (C++) to evaluate diverse networking scenarios.
Collaborated in a team environment, responding directly to inquiries from professors and master's students.