LLM, RAG & the Power of Generative AI: A Conversation with Michael Concannon
January 31, 2024


Imagine an online shopping experience that goes something like this:

User: I’m a 50-year-old male who tends to dress conservatively. I’m speaking at a Macy’s data conference on Friday. What should I wear?

Digital Assistant: What’s the weather going to be like on Friday?

User: Moderate.

Digital Assistant: Here are some items that could work for your presentation. A conservative navy suit, a lightweight white dress shirt, comfortable black dress shoes, and a professional blue tie.

From there, the user can ask follow-up questions such as: “Is that shirt stain-resistant?” and “How much is this complete outfit using the stain-resistant shirt instead of the linen?”

The assistant says: “The prices are as follows:

– DKNY Men’s Modern-Fit Stretch Suit Jacket: $360
– Van Heusen Men’s Stain Shield Regular Fit Dress Shirt: $60
– Men’s Tuxedo Cap-Toe Oxford Patent Leather Dress Shoe: $158.99
– Tommy Hilfiger Men’s Classic Textured Stripe Tie: $69.50

So, the total cost of the outfit would be $360 + $60 + $158.99 + $69.50 = $648.49.”

This “conversation” happened between Stylitics’ CTO, Michael Concannon, and a Digital Styling Assistant chatbot leveraging an LLM (large language model), without embellishment or manipulation. It is a further step toward a world where humans will increasingly trust AI systems for helpful, everyday interactions.

It’s also fueling development of Stylitics’ next generation of retail technology, Inspirational Commerce™.

We sat down with Mike to discuss what these language models mean for retailers, what he’s most excited about at Stylitics, and where he sees the technology headed.

Q: Starting with the example above, did you have to feed the whole catalog into an LLM?

A: No, our chatbot sits between the user and the large language model. It performs the real-time search for product information, availability, and price. It then augments the responses with the catalog information, user context, and style information. This pattern is referred to as RAG, or retrieval-augmented generation.
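
To make that flow concrete, here is a minimal sketch of the retrieve-augment-generate loop in Python. The function names, prompt wording, and sample data are illustrative assumptions for this post, not Stylitics’ actual implementation.

```python
# A minimal sketch of the RAG flow: retrieve catalog data, augment the
# prompt with it, then generate. All names and data here are hypothetical.

def search_catalog(query: str) -> list[dict]:
    # Stand-in for a real-time product search (info, availability, price).
    return [
        {"name": "Navy modern-fit suit jacket", "price": 360.00, "in_stock": True},
        {"name": "Stain-shield white dress shirt", "price": 60.00, "in_stock": True},
    ]

def call_llm(prompt: str) -> str:
    # Stand-in for a call to a hosted large language model.
    return "Here are some items that could work for your presentation..."

def answer(user_message: str, shopper_context: str) -> str:
    # 1. Retrieve: find relevant products for this question.
    products = search_catalog(user_message)

    # 2. Augment: fold catalog data and shopper context into the prompt.
    catalog_lines = "\n".join(
        f"- {p['name']}: ${p['price']:.2f} (in stock: {p['in_stock']})"
        for p in products
    )
    prompt = (
        f"Shopper context: {shopper_context}\n"
        f"Relevant catalog items:\n{catalog_lines}\n"
        f"Question: {user_message}\n"
        "Answer as a helpful styling assistant, using only the items above."
    )

    # 3. Generate: the model composes the final, grounded response.
    return call_llm(prompt)

print(answer("What should I wear to a data conference?",
             "50-year-old male, dresses conservatively"))
```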

Q: How does the assistant know to stay on topic and on brand?

A: Our system injects shopper context, as well as retailer voice and style. This is possible with a combination of two techniques. The first is prompt engineering: constructing the proper contextual descriptions around the user input and our data when communicating with the language model. We can set the stage. The second is a bit of model fine-tuning, which refines the model slightly by supplying a reasonable number of ideal responses as positive feedback.
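
As a rough illustration, here is what that kind of prompt construction can look like. The brand-voice text, template, and example fine-tuning pair below are hypothetical, meant only to show how the stage gets set around the user’s input.

```python
# A rough illustration of prompt engineering: wrapping the user's input and
# catalog data in contextual instructions. Template text is an assumption.

BRAND_VOICE = (
    "You are a styling assistant for an upscale department store. "
    "Keep a warm, professional tone, stay on the topic of fashion, "
    "and recommend only items from the catalog excerpt provided."
)

def build_prompt(user_input: str, catalog_excerpt: str, shopper_context: str) -> str:
    # The fixed framing around the variable pieces is where the stage is set:
    # retailer voice, guardrails, and the data the model may draw from.
    return (
        f"{BRAND_VOICE}\n\n"
        f"Shopper: {shopper_context}\n"
        f"Catalog excerpt:\n{catalog_excerpt}\n\n"
        f"Shopper's message: {user_input}\n"
        "Respond in the retailer's voice."
    )

# Fine-tuning, the second technique, instead supplies a modest set of ideal
# responses ahead of time as positive feedback, e.g.:
FINE_TUNE_EXAMPLES = [
    {"prompt": "Is this tie too bold for a conference?",
     "response": "It's a classic textured stripe, so it reads polished, not loud."},
]
```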

Q: Is it personalized?

A: Yes, the assistant is aware of the shopper context, including purchase history, browse history, and location. That data both changes the prompts and can be used to sort search results when finding the most appropriate products for a given shopper.
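
A simplified sketch of that second use, sorting search results by shopper context, might look like the following. The scoring heuristic and field names are hypothetical, not Stylitics’ ranking logic.

```python
# Hypothetical re-ranking of search results using shopper context.

def rank_for_shopper(products: list[dict], shopper: dict) -> list[dict]:
    def score(product: dict) -> float:
        s = 0.0
        # Boost brands the shopper has purchased or browsed before.
        if product["brand"] in shopper.get("purchased_brands", []):
            s += 2.0
        if product["brand"] in shopper.get("browsed_brands", []):
            s += 1.0
        # Prefer items stocked near the shopper's location.
        if shopper.get("location") in product.get("stocked_near", []):
            s += 0.5
        return s

    # Highest-scoring products first.
    return sorted(products, key=score, reverse=True)

products = [
    {"name": "Striped tie", "brand": "Tommy Hilfiger", "stocked_near": ["NYC"]},
    {"name": "Dress shirt", "brand": "Van Heusen", "stocked_near": ["LA"]},
]
shopper = {"purchased_brands": ["Van Heusen"], "location": "NYC"}
print([p["name"] for p in rank_for_shopper(products, shopper)])
```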

Q: Looking ahead, what is Stylitics doing with AI and LLMs?

A: This prototype was built to test a concept while educating the team on what is possible. The models are proving to be very helpful in data validation, and clean data is critical to so many of our workflows. They are also helping with data augmentation, enriching products with additional attributes and generating additional content that provides new insights.
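
For a sense of what LLM-assisted data validation and augmentation can look like, here is a small hypothetical example: a prompt that asks a model to flag inconsistencies in a product record and propose missing attributes. The fields and prompt wording are assumptions, not Stylitics’ pipeline.

```python
# Hypothetical prompt for LLM-assisted catalog validation and augmentation.
import json

def validation_prompt(product: dict) -> str:
    return (
        "You are validating retail catalog data.\n"
        f"Product record: {json.dumps(product)}\n"
        "1. Flag any fields that look inconsistent (e.g., category vs. name).\n"
        "2. Suggest values for missing attributes such as color, fit, or occasion.\n"
        "Reply as JSON with keys 'issues' and 'suggested_attributes'."
    )

record = {"name": "Van Heusen Stain Shield Dress Shirt",
          "category": "footwear", "color": None}
print(validation_prompt(record))  # would be sent to the model in practice
```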

Q: Have you had any conversations with current Stylitics clients to help inform your work with these models?

A: There is strong interest on several levels. We are eager both to enhance our existing workflows and to explore new use cases that enable new inspirational experiences. These models are able to validate our existing computer vision capabilities, generate new tips and recommendations, augment the data with styles and themes, and provide new shopper journeys. We will see short-term wins as they assist our current workflows and long-term wins as they unlock new experiences.

Q: What are they most looking forward to being able to do?

A: In a very short time, we have already seen rapid progress in the accuracy and depth of understanding displayed by the models. We are looking forward to that continued evolution as they are built into our business workflows. We saw these models start to help individuals as side assistants in 2023, and we will now see them more deeply integrated into our systems.
