Narek Maloyan
AI Research Engineer & PhD Candidate
GitHub · LinkedIn · Kaggle · Google Scholar · Email · X/Twitter · Resume PDF
I am an AI Research Engineer at Zencoder, where I build AI-powered coding agents. In May 2025, our team reached #1 on SWE-bench Verified with a 70% success rate, setting a new benchmark for autonomous software engineering. My day-to-day work sits at the intersection of large language models and developer tools: figuring out how to make AI agents that write reliable, production-quality code.
I am also a PhD candidate at Lomonosov Moscow State University, where my research focuses on AI safety and LLM security. I study prompt injection attacks across coding assistants, evaluation systems, and tool-integrated agents: how LLMs can be manipulated, and how we can defend against those manipulations. This includes work on LLM-as-a-Judge vulnerabilities, MCP protocol security, and trojan detection in large language models. To date, I have co-authored 13 peer-reviewed publications spanning AI safety, LLM security, medical AI, and computer vision.
Before focusing on AI safety, I worked as an ML engineer across several domains: video highlights and recommendation systems at Viasat, medical article recommendations and speech-to-text at TrendMD, and MRI-based brain tumor classification at Burdenko Neurosurgery Institute. I have been teaching a graduate-level Deep Learning course at MSU since 2021, and I maintain open-source projects including manim-js (339+ stars), a TypeScript port of 3Blue1Brown's animation engine.
Experience
Teaching
Graduate-level course covering neural architectures, optimization, and practical applications.
Open-source reference guide covering key concepts across ML and data science.
TypeScript port of 3Blue1Brown's Manim for creating math animations on the web.
Selected Publications
- Prompt Injection Attacks on Agentic Coding Assistants. IJOIT 14(2), 2026. [paper]
- Breaking the Protocol: MCP Security Analysis. Modern IT and IT-education 21(3), 2026. [paper]
- Investigating LLM-as-a-Judge Vulnerability to Prompt Injection. IJOIT 13(9), 2025. [paper]
- Adversarial Attacks on LLM-as-a-Judge Systems. arXiv:2504.18333, 2025. [paper]
- Prompt Injection Attacks in Defended Systems. DCCN, 2024. [paper]
Blog
Contact
Available for full-time roles, research collaborations, consulting, and speaking engagements. Reach me through any of the channels below:
- [email protected]
- github.com/maloyan
- linkedin.com/in/nmaloyan
- x.com/NarekMaloyan
- Google Scholar
- kaggle.com/narek1110