Joel Niklaus is a Research Scientist at Harvey, where he focuses on developing and evaluating LLM systems in the legal domain. He also serves as a Lecturer at the Bern University of Applied Sciences, teaching continuing education courses on NLP. Prior to his current roles, Joel was an AI Resident at (Google) X, where he trained multi-billion parameter LLMs on hundreds of TPUs, achieving state-of-the-art performance on LegalBench. His experience also includes investigating efficient domain-specific pretraining approaches at Thomson Reuters Labs.
Joel’s academic journey led him to Stanford University, where he conducted research on LLMs in the legal domain under the supervision of Prof. Dan Ho and Prof. Percy Liang. He has served as an advisor to companies specializing in the application of modern NLP to legal challenges and has led research projects for the Swiss Federal Supreme Court. He has extensive experience in pretraining and finetuning LLMs for diverse tasks across various compute environments. His research primarily focuses on dataset curation for training and evaluating language models multilingually in the legal domain, and his datasets have laid the groundwork for legal NLP in Switzerland.
Joel’s research has been published at leading Natural Language Processing and Machine Learning conferences, earning him an Outstanding Paper Award at ACL. He holds a PhD in Natural Language Processing, a Master’s in Data Science, and a Bachelor’s in Computer Science from the University of Bern.
I’m currently operating at near capacity with my existing commitments, but I am still open to consulting/advising on exceptional projects that pique my interest.