LLMs Enable Iterative ML


After working on LLM applications for several months now, I am still amazed at how well these models work. The reasoning and awareness from these models are straight out of a sci-fi movie. Just last week, Google DeepMind released research suggesting that GPT-4’s emotional and social intelligence is on par with humans’ – that means GPT-4 might have a better EQ than many people I know, including myself šŸ˜…. It’s impressive how good generative outputs can be, from working code generation to composition and recommendation systems, but these may not be the biggest improvements these models offer data scientists.

One of the most significant advantages these models bring to data science workflows is their ability to resolve major pain points in traditional machine learning pipelines. In a typical DS workflow, it takes a lot of work to go from a hypothesis to conclusions, often spanning entire disciplines like Data Engineering, Data Science, and MLOps. All of that takes time, which adds stress at a young startup and turns many experiments into make-it-or-break-it moments because of how many resources were spent and how little runway is left.

Unfortunately, a lot of this work happens before there is any real user feedback. What if the ML pipeline is rendered moot because users don’t engage with it? I bet many folks have had a similar experience: at one of my previous companies, we spent months whipping up a forecasting algorithm, only to find out that our users were just using Power BI to download Excel sheets and compute a simple moving average. We need to understand what our users are trying to do within our software so that we can meet them where they are and provide a seamless interface.

We can circumvent a lot of the pain in standard ML workflows by using models like GPT-4 to short-circuit the number of sprints it takes to get real user feedback. We can use clever prompt engineering techniques like Chain of Thought, ReAct, and many other LLM techniques to help us:

  1. Prototype really fast. Using popular interfaces like LangChain, it’s seamless to prototype and productionize prompt-based models. At my current job, I’ve shipped eight models in the last five months, spanning topic modeling, sentiment analysis, and classification. Of course, there are commonalities with traditional DS workflows, like all the data wrangling, but I was able to iterate quickly, see what was working and what wasn’t, and, most importantly, seek feedback from my team (a minimal prototyping sketch follows this list).
  2. Solidify the requirements of the product. With a crude model, I could whip up a demo in a Jupyter Notebook and start a conversation with my boss or teammates, who would opine on how well the model was working. Often, this interaction uncovers implicit requirements that weren’t clear at the beginning of the project: subtle dependencies, nuances in the problem space, or relevant business applications. And since it’s so easy to prototype, we can pivot live and keep refining the scope of what we want the models to do.
  3. Collect user feedback. Once we’ve internally pressure-tested the problem space and the initial model, we deploy it to our users. We use LangSmith to collect feedback on every prompt we have, letting us see trends logged over time for any model. This shows us whether a user engages with the model and captures additional feedback, such as a rating on a 5-point scale (a feedback-logging sketch follows below as well).
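
To make the prototyping point concrete, here is a minimal sketch of the kind of prompt-based classifier I’m describing, wired up with LangChain’s chat-model interface and a chain-of-thought style prompt. The model name, prompt wording, and example input are illustrative assumptions, not our production setup:

```python
# A minimal sketch, assuming langchain-openai and langchain-core are installed
# and OPENAI_API_KEY is set in the environment.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatOpenAI(model="gpt-4o", temperature=0)

# Chain-of-thought style prompt: ask the model to reason step by step
# before committing to a final label.
prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a sentiment classifier. Think through the text step by step, "
     "then end with a final label: positive, negative, or neutral."),
    ("human", "Text: {text}"),
])

chain = prompt | llm | StrOutputParser()

# Illustrative input; in practice this comes from the data pipeline.
print(chain.invoke({"text": "The dashboard is slow, but support was great."}))
```

Swapping out the system prompt is all it takes to pivot from sentiment to topic modeling or another classification task, which is what makes the demo-and-refine loop in step 2 so cheap.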

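Similarly, here is a hedged sketch of what logging that 5-point rating could look like with the LangSmith client. The run ID placeholder and the `user_rating` feedback key are illustrative; in a real deployment, the run ID would be captured from the traced chain invocation:

```python
# A minimal sketch, assuming the langsmith package is installed and
# LANGSMITH_API_KEY is set in the environment.
from langsmith import Client

client = Client()

# "<traced-run-uuid>" is a placeholder; in production, capture the run ID
# from the traced invocation of the chain.
client.create_feedback(
    run_id="<traced-run-uuid>",
    key="user_rating",  # hypothetical feedback key
    score=4,            # the user's rating on the 5-point scale
    comment="Helpful summary, slightly verbose.",
)
```
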
I’m a huge fan of Don Norman’s The Design of Everyday Things, which is all about how good product design enables people to use a system to accomplish their goals. In my professional life, LLMs act as a tool for getting rapid feedback. As I mull over a problem, I can experiment quickly (and cheaply now, thanks to GPT-4o), which lets us collect a ton of feedback. Using LLMs this way weaves elements of iterative design into ML pipelines, where the models evolve with each wave of feedback. It also lets us meet the user months ahead of the schedules I’ve seen at other companies, saving precious resources and my sanity (albeit less precious, as companies have often let me know).

Thank you for reading – I hope this offered a fresh perspective on how to use LLMs in your workflow to get more feedback. And now for the shameless plug (we’re all about feedback, if you can’t tell): drop a comment with your thoughts on the article and share any of your experiences using LLMs. Your engagement helps keep these discussions lively and insightful!

2 responses to “LLMs Enable Iterative ML”

  1. Brian Glennon

    This is such an insightful reflection on how LLMs like GPT-4 are transforming traditional workflows in data science! I love how you emphasized the shift from lengthy, resource-intensive processes to rapid prototyping and iterative design. The point about uncovering implicit requirements during team discussions is particularly resonant—it highlights how these tools foster collaboration and sharpen the focus on user needs. It’s inspiring to see LLMs not just as technical solutions but as enablers of better design and human-centered approaches. Your experience really showcases the potential for integrating AI into workflows to save time, reduce stress, and ultimately deliver more user-aligned products. Thanks for sharing this perspective!


    1. Matt Machado

      Hey Brian — thanks for reading this article, hope it was insightful!

      I completely agree that LLMs are changing our workflows. I was just so shocked to see these models help just as much in my team communications and project definitions (plus the exciting features and performance everyone chats about are fantastic as well).

