When I started the Wayfair AI externship in March 2026, the goal was straightforward on paper: build AI agents that could track competitor activity, detect trends, and generate content insights for the home goods category team. In practice, it turned into a crash course in agentic workflow design, data pipeline thinking, and the surprisingly tricky problem of getting multiple AI agents to produce output that's actually consistent enough to act on.
The problem
Category teams at e-commerce companies are constantly asking: what are competitors doing? What's trending? What should we do about it? Answering those questions manually takes hours a day. The goal was to reduce that to minutes.
What I built
A multi-agent pipeline built in n8n, with three main agents:
- Trend Detection Agent — monitors consumer demand signals across home goods categories, pricing shifts, and product launch patterns.
- Competitor Tracking Agent — pulls and normalizes competitor activity into structured signals.
- Content Generation Agent — takes trend and competitor signals and generates draft insights and content recommendations.
All three agents feed into a single live-updating Google Sheets dashboard that combines trend signals, competitive benchmarks, and AI-generated insights in one view.
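The real pipeline runs as n8n workflows, but the shape of it can be sketched in a few lines of Python. This is a hypothetical illustration, not the production code: the agent functions, the `Signal` record, and the hard-coded example rows are all stand-ins for LLM calls and live data. The key idea it shows is that all three agents converge on one fixed row format before anything reaches the dashboard.

```python
from dataclasses import dataclass, asdict

# Hypothetical shared row schema; every agent must emit this shape
# so rows can be appended to one dashboard without per-agent handling.
@dataclass
class Signal:
    agent: str     # which agent produced the row
    category: str  # home goods category
    summary: str   # one-line insight
    score: float   # priority/confidence in [0, 1]

def trend_agent() -> list[Signal]:
    # placeholder: in the real workflow this is an LLM step over demand data
    return [Signal("trend", "lighting",
                   "Searches for rattan pendants rising week-over-week", 0.8)]

def competitor_agent() -> list[Signal]:
    # placeholder: normalized competitor activity
    return [Signal("competitor", "lighting",
                   "Competitor launched 12 new pendant SKUs", 0.9)]

def content_agent(signals: list[Signal]) -> list[Signal]:
    # drafts a content recommendation from each upstream signal
    return [Signal("content", s.category,
                   f"Draft a buying guide responding to: {s.summary}", s.score)
            for s in signals]

upstream = trend_agent() + competitor_agent()
rows = [asdict(s) for s in upstream + content_agent(upstream)]
# each dict becomes one row appended to the Google Sheets dashboard
```

Because every agent returns the same `Signal` shape, the dashboard step is a dumb append loop rather than three format-specific integrations.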
The hardest part
It wasn't building individual agents — it was making outputs consistent enough across runs to be trustworthy. AI agents that produce different formats on different days are not useful in production. A lot of my work ended up being prompt engineering and output normalization to get structured, reliable data flowing downstream.
Takeaway
The interesting design question isn't "can I automate this?" — it's "what does the output need to look like for a human to actually trust and act on it?" That question drove every decision in this project.