Using LLMs for User Insight and Better Interaction Flows

This past week I had the opportunity to attend and speak at CascadiaJS in Seattle, and one theme was clear: LLMs and user interfaces are converging fast.

Several talks highlighted advances in client-side LLM applications, including lightweight models like Gemma 3 that can run locally. These don’t yet compete with the GPT series in capability, but they represent important first steps toward reducing costs and improving user privacy—two critical factors for real-world adoption.

[Image: Talk at CascadiaJS]

From Research to Practice: LLMs as Classifiers for UX

Over the past year, I’ve been exploring how LLM-based classifiers can help us better understand user interactions in chat data. At CascadiaJS, I shared how this technique can extend to user interfaces more broadly.

A great foundation for this work comes from a Microsoft Research paper: TnT-LLM: Text Mining at Scale with Large Language Models. The paper not only explains the process but also shares example prompts, making it easy to replicate.

The core idea: use LLMs iteratively to generate, refine, and finalize a taxonomy of labels from raw conversational data. This taxonomy can then power real-time classification of new user interactions.
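
For intuition, here’s a hypothetical example of the kind of taxonomy this process might produce (the labels are invented for illustration, not taken from the paper):

```js
// A hypothetical finalized taxonomy: each label pairs a short name with
// a description the classification prompt can lean on.
const taxonomy = [
  { label: "code_help", description: "User asks for help writing or debugging code." },
  { label: "content_writing", description: "User wants text drafted, edited, or rewritten." },
  { label: "information_lookup", description: "User asks a factual question or wants an explanation." },
  { label: "task_planning", description: "User wants help organizing steps, schedules, or decisions." },
];
```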


Hands-On: Node.js Example

To make this concrete, I built a Node.js implementation that works with Azure OpenAI or locally hosted models via Ollama.
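
Here’s a minimal sketch of the model call both backends can share, assuming Ollama’s OpenAI-compatible endpoint on its default port (pointing this at Azure OpenAI mostly means swapping the URL and auth header; the model name is just an example):

```js
// Minimal chat helper against Ollama's OpenAI-compatible endpoint
// (default port 11434). For Azure OpenAI, swap the URL for your
// deployment's endpoint and add the service's api-key header.
const OLLAMA_URL = "http://localhost:11434/v1/chat/completions";

async function chat(messages, model = "gemma3") {
  const res = await fetch(OLLAMA_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, messages, temperature: 0 }),
  });
  if (!res.ok) throw new Error(`LLM request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```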

Here’s the recipe in simple steps:

  1. Get data — you can use open datasets like WildChat.
  2. Reduce chat size — truncate tokens or summarize each chat with an LLM.
  3. Batch summaries — ~100 summaries per batch works well.
  4. Seed labels — ask the LLM to generate a list of labels + descriptions for the first batch.
  5. Iterate — feed that list into subsequent batches, refining as new themes emerge (see the sketch after this list).
  6. Generalize — continue looping until you’ve built a broad, representative label set.
  7. Finalize taxonomy — enforce requirements like a maximum number of labels, non-overlapping categories, and consistent descriptions.

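A minimal sketch of steps 4 through 6, assuming the chat() helper above and an array of per-chat summaries from step 2 (the prompt is a simplified stand-in for the more careful ones shared in the TnT-LLM paper):

```js
// Steps 4 through 6: seed a label list on the first batch, then keep
// refining it batch by batch. Assumes chat() from above and an array
// of per-chat summaries produced in step 2.
const BATCH_SIZE = 100;

async function buildTaxonomy(summaries) {
  let labels = "(none yet)";
  for (let i = 0; i < summaries.length; i += BATCH_SIZE) {
    const batch = summaries.slice(i, i + BATCH_SIZE).join("\n- ");
    labels = await chat([
      {
        role: "system",
        content:
          "You maintain a taxonomy of user-intent labels. Given the " +
          "current labels and a new batch of chat summaries, return an " +
          "updated list of labels, each with a one-line description.",
      },
      {
        role: "user",
        content: `Current labels:\n${labels}\n\nNew summaries:\n- ${batch}`,
      },
    ]);
  }
  return labels; // raw text; step 7 curates this into the final taxonomy
}
```
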
At the end, you’ve got a label taxonomy you can use to classify new interactions in real time, either with a hosted API or a local model.
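
And a sketch of the classification side, assuming the finalized taxonomy has been parsed into the structured form shown earlier:

```js
// Classify one new interaction against the finalized taxonomy
// (an array of { label, description } objects, as sketched earlier).
async function classify(userMessage, taxonomy) {
  const labelList = taxonomy
    .map((t) => `${t.label}: ${t.description}`)
    .join("\n");
  const answer = await chat([
    {
      role: "system",
      content:
        "Classify the user's message into exactly one of these labels. " +
        "Reply with the label name only.\n" + labelList,
    },
    { role: "user", content: userMessage },
  ]);
  return answer.trim();
}
```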


Why This Matters for User Flows

Once we have real-time classification, we can start reshaping user flows dynamically. For example:

  • Detect intent and serve relevant forms, buttons, or content.
  • Personalize ads, recommendations, or onboarding steps.
  • Adapt UI elements based on interaction style (exploratory vs. task-driven).

This bridges the gap between raw user behavior and adaptive interfaces.
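
As a rough sketch of how that could look, a classified intent can drive which component or flow the app renders next (the intent names and components below are invented for illustration):

```js
// Route a classified intent to a UI adaptation. The intent names and
// component choices are illustrative placeholders.
function nextUiStep(intent) {
  switch (intent) {
    case "code_help":
      return { component: "CodeEditorPanel", prefillFromChat: true };
    case "task_planning":
      return { component: "ChecklistBuilder" };
    case "information_lookup":
      return { component: "SearchResultsCard" };
    default:
      return { component: "DefaultChatView" };
  }
}

// e.g. const intent = await classify(message, taxonomy);
//      render(nextUiStep(intent));
```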


Beyond Screens: Physical Interactions

One of the most exciting demos at CascadiaJS came from Charlie Gerard, who showcased a motion control application.

It made me wonder: what if we combined motion input with LLM-based classifiers? Imagine:

  • Detecting user gestures and mapping them to intent.
  • Triggering physical responses—like lights, audio, or IoT devices.
  • Creating multimodal interactions where LLMs interpret both text and actions.

This opens up a huge design space where interfaces respond holistically to users, not just their words.
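
Purely as a thought experiment, the same classify-then-route pattern could take a detected gesture as input; everything below is hypothetical and assumes a taxonomy built around gesture intents:

```js
// Hypothetical: treat a detected gesture as another input the
// classifier can interpret, then dispatch a physical response.
const actions = {
  dim_lights: () => console.log("Dimming the lights…"), // stand-in for an IoT call
  play_audio: () => console.log("Playing a chime…"),
};

async function handleGesture(gestureLabel, taxonomy) {
  const intent = await classify(`User gesture detected: ${gestureLabel}`, taxonomy);
  (actions[intent] ?? (() => console.log(`No action mapped for "${intent}"`)))();
}
```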


Closing Thoughts

We’re just scratching the surface of how LLM-powered classifiers can reshape user insights and interaction design. Whether running client-side for privacy and speed, or in hybrid flows that call hosted APIs, the taxonomy approach offers a lightweight, flexible way to structure user behavior data.

And when we extend this to physical interactions, we move closer to a future where interfaces don’t just wait for input—they understand, adapt, and respond in real time.


👉 If you’re curious to try it out, check out the GitHub repo. I’d love feedback or ideas on how to expand this into motion, multimodal, or IoT-driven applications with the web.