Adrian de Wynter
I am a principal applied scientist at Microsoft and a researcher (PGR) at the University of York. I work on projects related to natural language understanding and generation, and on fundamental problems in deep learning, such as reasoning and the formal modelling of dialogue, particularly in LLMs.
At Microsoft my work involves leading, designing, and deploying Word and Office AI features and research. These deal with composition (what you see when you type in Word), multilinguality (e.g., expanding products to new markets), measurement (reasoning, automated evaluation), personalisation, and other workstreams. Yes, I also work on buzzwords like 'agentic workflows'. You can see most of this work in Word Copilot.
My primary research interest is reasoning as it relates to language in humans and machines. Lately I have focused on LLM-based reasoning capabilities (e.g., here, here, and here). My theoretical work is intuitionistic: algorithms come with guarantees of complexity and convergence via constructive proofs, and must relate to a realistic (e.g., production) scenario. This yields meaningful answers to complex problems.
For example, we used category theory to prove that some prompting strategies are objectively better than others, and that they produce outputs users prefer (this ended up as a product in Word). I also recently wrote an algorithm with cryptographic guarantees for determining trust in LLMs-as-judges.
In earlier work I showed that finding a globally optimal solution to model compression is undecidable, but proved that polynomial-time approximation algorithms exist--and applied these results to BERT, reaching a (then) state of the art in model compression. This contribution was later adapted for quantum circuit optimisation in work at ORNL. I also showed (bridging learning theory and TDA) how, and when, LLM-based data augmentation works.
My other research interests are recreational mathematics (games), preserving endangered languages, and computational social science. In the latter I have worked on mitigating toxicity and other harms of LLMs, on research about LLM research, and on the very first study of the impact of ChatGPT on loneliness.
Incidentally, I am now on Twitter.
Last updated: July '25.
I've found it useful to have a series of "posts" on the work I do, to make it more accessible and to share my passion for mathematics, especially since I don't have any social media (does LinkedIn count?).
I'm absolutely terrible at updating this site (record: 2 years), so bear with me.
Following Larry Wasserman's essay, I invite comments on the papers below. Feel free to email me.
For a longer, complete list of works see here.
For how to handle my last name's weird spelling rules, see here.
Some media coverage of the work I do, in case my posts remain as confusing as the original papers.
Contact: first-initial-full-last-name-including-tussenvoegsel (at) microsoft.com
Factoid: my ORCID (326797241) is a prime number; it is expressible as a sum of two squares (1715^2 + 17996^2); and it is the hypotenuse of a Pythagorean triple (326797241^2 = 61726280^2 + 320914791^2). Yay.
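If you'd like to check the factoid yourself, a few lines of Python will do; this sketch verifies the two sum-of-squares identities from the text and tests primality with plain trial division (all numbers are taken from the factoid above):

```python
# Sanity-checking the ORCID factoid. All numbers come from the text;
# primality is tested by trial division, which is fine for a nine-digit number.
ORCID = 326797241

def is_prime(n: int) -> bool:
    """Deterministic trial division up to sqrt(n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Sum of two squares: 1715^2 + 17996^2 == 326797241
assert 1715**2 + 17996**2 == ORCID

# Hypotenuse of a Pythagorean triple:
# 326797241^2 == 61726280^2 + 320914791^2
assert 61726280**2 + 320914791**2 == ORCID**2

print("prime:", is_prime(ORCID))
```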