How Target Used GenAI to Lift Sales by 9% Across 100K+ Products
LLMs helped Target match the right add-ons, from throw pillows to phone cases, boosting engagement (+11%) and relevance (+12%) without blowing up compute.
Fellow Data Tinkerers!
Today we will look at how Target used GenAI to recommend better products to users.
But before that, I wanted to share an example of what you could unlock if you share Data Tinkerer with just 2 other people.
There are 100+ more cheat sheets covering everything from Python, R, SQL, Spark to Power BI, Tableau, Git and many more. So if you know other people who like staying up to date on all things data, please share Data Tinkerer with them!
Now, with that out of the way, let’s see how Target leverages GenAI for better shopping experience.
TL;DR
Situation
Target needed to improve how it recommended accessory products (e.g., cases, cables, decor) across its massive catalog, especially in the Electronics and Home categories.
Task
Build a scalable, smart “related accessory” recommendation system that accounts for practical fit (e.g., size, compatibility) and aesthetic style (e.g., color, material, vibe).
Action
The Target Product Recommendations team developed GRAM (GenAI-based Related Accessory Model), using Large Language Models to:
Identify important attributes for each item pairing
Score accessories based on those weighted attributes
Match for visual/aesthetic harmony
Scale scoring by evaluating item type pairs instead of all item pairs
Add a human-in-the-loop layer to diversify results and enable cross-category pairing
Result
+11% in engagement
+12% in relevant suggestions
+9% in attributable demand
Use Cases
Improved search relevance, query processing at scale, ranking and personalisation
Tech Stack/Framework
LLM, recommendation system, Gen-AI recommendation system
Explained further
The art of “you might also need…”
Batteries for a toy. A phone case. The just-right end table for a new couch. Shoppers want to know which products pair well and they want the answer now. But giving smart, useful accessory recommendations isn’t easy, especially at Target’s scale. The catalog is massive. Attributes matter differently depending on what you're buying. A parent shopping for a craft kit might care about age suitability. Someone buying linens is probably focused on color and material. Recommending the right companion item depends on surfacing the right attributes at the right time.
That’s the problem Target’s Product Recommendations team was asked to solve. Specifically, to build a "related accessory" algorithm across two high-volume categories: Electronics and Home.
The solution: GRAM, a GenAI-based Related Accessory Model. Built for the Home category, GRAM uses LLMs to generate scalable, high-quality accessory recommendations that balance both practicality and aesthetics.
Here’s how the team pulled it off.
Three hurdles to getting this right
Accessory recommendation sounds simple until you look at the actual variables in play. The team hit three big technical roadblocks early on:
1. Figuring out which attributes actually matter
First problem: in a product catalog of this size, how do you know which attributes to prioritize?
Manually deciding this for every product pair was off the table: far too slow and inconsistent. So the team leaned on LLMs to analyze product data, surface the most relevant attributes and assign importance weights automatically.
The model treated each product relationship as a pair: a “core” (or seed) item and its potential accessory. It then generated scoring rules using LLMs. These rules looked at attribute overlaps and added up weights accordingly. For example:
If you’re recommending pillowcases for a sheet set, the model prioritizes color and material.
But if you are recommending a book for a kids’ activity kit, the intended audience (infant, child, adult) and brand take the top spots.
Once the rules are defined, every accessory gets a score. The more relevant the match, the higher the score. Sales rank is used as a tie-breaker if scores come out equal.
This first step automated what would’ve taken human experts hours of item-by-item analysis and made the entire approach extensible across the catalog.
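The scoring mechanics described above can be sketched in a few lines. Note this is a hypothetical illustration, not Target's actual code: the attribute names, weights and products below are made up, and in the real system the weights come from LLM-generated scoring rules rather than a hand-written dictionary.

```python
# Hypothetical sketch of GRAM-style weighted attribute scoring.
# Weights, attributes and products are illustrative only.

def score_accessory(core, accessory, weights):
    """Sum the weight of every attribute the two items share."""
    score = 0.0
    for attr, weight in weights.items():
        if core.get(attr) is not None and core.get(attr) == accessory.get(attr):
            score += weight
    return score

def rank_accessories(core, candidates, weights):
    """Rank by score (higher first), breaking ties with sales rank (lower first)."""
    return sorted(
        candidates,
        key=lambda item: (-score_accessory(core, item, weights), item["sales_rank"]),
    )

# Example: recommending pillowcases for a sheet set,
# where color and material carry the most weight.
weights = {"color": 0.5, "material": 0.4, "brand": 0.1}
sheet_set = {"color": "sage", "material": "linen", "brand": "Casaluna"}
pillowcases = [
    {"id": "A", "color": "sage", "material": "cotton", "brand": "Other", "sales_rank": 3},
    {"id": "B", "color": "sage", "material": "linen", "brand": "Casaluna", "sales_rank": 9},
    {"id": "C", "color": "white", "material": "linen", "brand": "Casaluna", "sales_rank": 1},
]
ranked = rank_accessories(sheet_set, pillowcases, weights)
print([item["id"] for item in ranked])  # ['B', 'C', 'A']
```

Here B wins on color, material and brand; A and C tie on score, so C's better sales rank breaks the tie, exactly the tie-breaking behavior the post describes.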
2. Matching for aesthetics, not just attributes
The second challenge was less measurable, more subjective: style.
Some product pairs need more than functional fit. They need to look good together. Think matching end tables and lamps or kitchen décor themes. It’s not just “same color = good”; it’s whether the combination feels right.
Turns out, the LLM was surprisingly decent at this too. It used concepts like color harmony and stylistic coherence to produce more intuitive matches.
The result: recommendations that didn’t just match on data but also made visual and design sense. That unlocked a different tier of user experience, one where product suggestions feel like they came from an interior designer, not a search algorithm.
3. Scaling across the entire catalog
Final challenge: scale.
The Home category alone has hundreds of thousands of items. Naively scoring every possible pair would be a compute nightmare. To keep things sane, the team constrained the scope: instead of scoring every individual item pair, GRAM evaluates item type pairs.
So rather than comparing every coffee table with every lamp, the model defines scoring rules between coffee tables and lamps as types. Then it applies those rules to all items in each type. That shortcut massively reduces complexity and lets the team parallelize the heavy lifting.
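That shortcut is easy to see in miniature. The sketch below is a toy illustration of the idea (the catalog, types and rule text are invented): define one rule per type pair, bucket items by type, then apply the rule only within each bucketed pair.

```python
# Toy sketch of why scoring item-TYPE pairs scales better than item pairs.
# Catalog, types and rules are illustrative, not Target's actual data.
from collections import defaultdict
from itertools import product

catalog = [
    {"id": 1, "type": "coffee_table"}, {"id": 2, "type": "coffee_table"},
    {"id": 3, "type": "lamp"}, {"id": 4, "type": "lamp"}, {"id": 5, "type": "lamp"},
]

# One scoring rule per (core type, accessory type) pair, defined once...
type_rules = {("coffee_table", "lamp"): "match on finish and style"}

# ...then applied to every item in each type bucket. Each type pair is
# independent, which is what makes the heavy lifting easy to parallelize.
by_type = defaultdict(list)
for item in catalog:
    by_type[item["type"]].append(item)

pairs_scored = 0
for (core_type, acc_type), rule in type_rules.items():
    for core, acc in product(by_type[core_type], by_type[acc_type]):
        pairs_scored += 1  # score(core, acc, rule) would run here

print(pairs_scored)  # 6: 2 coffee tables x 3 lamps under one shared rule
```

Only one rule needs to be generated for the whole coffee-table/lamp relationship, instead of bespoke logic for every item-level pairing; rule count grows with the number of item types, not catalog size.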
This tweak made GRAM fast enough to run across the entire Home category and modular enough to be reused elsewhere. The architecture is now in place to extend the model beyond Home, into categories like Electronics, Toys and more.
Blending AI with retail know-how
Even with a strong model, humans still played a crucial role.
The team looped in Target’s site merchants to build a list of commonly co-purchased items. That gave the system a human-approved shortlist of accessories that go well together even across categories.
These additions helped in two major ways:
They enabled smarter cross-category recommendations (like matching an electronics item with a home office accessory).
They diversified the results. Left to itself, the model sometimes recommended very similar items. With the human-in-the-loop layer, it broadened the accessory range dramatically.
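One simple way to picture that human-in-the-loop layer: interleave the model's scored list with the merchant-curated co-purchase shortlist, skipping duplicates. This is a hypothetical sketch; the function name, blending strategy and items are assumptions for illustration, not Target's implementation.

```python
# Hypothetical sketch: blend model recommendations with a
# merchant-curated shortlist so results span more item types.

def blend_recommendations(model_recs, merchant_recs, k=5):
    """Alternate model and merchant picks, skipping duplicates."""
    out, seen = [], set()
    queues = [list(model_recs), list(merchant_recs)]
    i = 0
    while len(out) < k and any(queues):
        queue = queues[i % 2]
        i += 1
        if not queue:
            continue
        item = queue.pop(0)
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

# Left alone, the model leans toward near-duplicates of the seed item;
# the merchant list injects complementary item types.
model_recs = ["throw pillow A", "throw pillow B", "throw pillow C"]
merchant_recs = ["throw blanket", "lumbar pillow", "throw pillow A"]
print(blend_recommendations(model_recs, merchant_recs, k=4))
# ['throw pillow A', 'throw blanket', 'throw pillow B', 'lumbar pillow']
```

The blended list still surfaces the model's best matches but no longer shows four variations of the same pillow, which is the diversification effect the post credits to the merchant layer.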
In practice, this led to two recommendation modes:
Model-only: Good for shoppers still comparing similar items, e.g., “Which pillowcase matches best?”
Model + HITL: Great for basket expansion, e.g., “I added a throw pillow, now show me a throw blanket, lumbar bed pillow or decorative glass that complements it.”
Together, these modes gave the system more versatility. Whether someone’s fine-tuning a choice or discovering what else to add, there’s now a smart assist ready.
Let’s talk results!
The team ran a proper A/B test to see if their new recommendation model actually worked. They plugged it into the add-to-cart flyout, that little pop-up box you see when you add something to your cart. It's a great place to suggest extras, like a phone case when you’re buying a phone. Perfect for giving shoppers a little nudge to grab one more item.
The results? Encouraging.
Interaction rate: up ~11%
Guests were more likely to click and explore the suggested items.
Display-to-conversion rate: up ~12%
The recommendations were more relevant, leading to actual purchases.
Attributable demand: up 9%+
Clear impact on overall sales from the suggestions shown.
Notably, this wasn’t just a UX win. It had a real effect on downstream conversions and revenue.
Wrapping it up
The Product Recommendations team pulled off a clean fusion of GenAI and human expertise with GRAM. They tackled messy problems like attribute prioritization, aesthetic matching and catalog scale with practical solutions that made sense at Target’s size and speed.
The model went into full production in April 2025 and it’s already shaping the way Target recommends accessories to its guests. Whether someone’s shopping for a couch, a bookshelf or a kids' toy, GRAM helps surface what goes well with it intelligently, tastefully and fast.
And the best part? The core design is flexible enough to extend into other categories. So if you're browsing headphones next month and a stylish case pops up just as you're checking out - yeah, that’s GRAM too.
Lessons learned
GenAI isn’t just for chat: LLMs can extract meaningful product attributes and make smart pairings without hard-coded rules.
Style is subjective but GenAI can still spot it: GenAI can “learn” visual harmony and recommend items that actually look good together.
Scale demands smart shortcuts: Scoring item types instead of every item pair made GRAM fast, scalable and production-ready.
Human-in-the-loop made the model better, not slower: GenAI doesn’t replace domain experts; given the right context, it amplifies them.
The full scoop
To learn more about this, check out Target's Engineering Blog post on this topic.
If you are already subscribed and enjoyed the article, please give it a like and/or share it with others, really appreciate it 🙏
Keep learning
How DoorDash Used LLMs to Trigger 30% More Relevant Results
How do you handle search queries like “low-carb spicy chicken wrap with gluten-free tortilla” at scale?
DoorDash rebuilt its search pipeline to better understand both user intent and product metadata. The result? A 30% increase in relevant results and measurable gains across key engagement metrics.
This post breaks down the hybrid approach they used, combining LLMs, structured taxonomies and real-time retrieval, without sacrificing speed or accuracy.
How Uber Cut Invoice Handling Time by 70% with GenAI (Without Ditching Humans)
Uber’s invoices were a hot mess. Thousands of formats, 25+ languages and way too much human copy-pasting. Even with automation, it was chaos. Their solution? A GenAI-powered doc processing system that cut invoice handling time by 70% and slashed costs by 30%.
If you want to learn about an actual example of GenAI being used in practice (rather than just vibes), check this article.