The Starting Point
In late January 2026, a B2B SaaS client, an inventory management platform serving small and medium-sized retailers, came to us with a specific problem. They had strong Google rankings for several target keywords (top-5 for "inventory management software SMB"), but when their target buyers queried ChatGPT or Perplexity for inventory management advice, competitors — not them — were being cited.
Initial audit findings:
- Zero presence in Bing Webmaster Tools (never submitted)
- GPTBot blocked in robots.txt (legacy blanket block from 2023)
- No Article or FAQPage schema on any content pages
- All content attributed to "Marketing Team" — no named authors, no author pages
- Top pillar article: 1,200 words with answers buried in narrative paragraphs
These were all fixable problems. The 30-day sprint addressed them in priority order.
Week 1: Technical Foundation
We spent the entire first week on technical groundwork before touching a single word of content. This sequencing is intentional — content improvements are worthless if AI crawlers can't access the pages.
- Day 1: Removed the GPTBot and PerplexityBot blocks from robots.txt. Verified via Google's robots.txt tester that no wildcard rules were catching other AI user agents.
- Day 2: Created and verified the client's Bing Webmaster Tools account. Submitted their XML sitemap. Ran Bing's URL inspection on their top 10 pages — 8 of 10 were not indexed by Bing despite all being indexed by Google.
- Day 3: Implemented IndexNow via their WordPress plugin. Published a test article and confirmed Bing indexed it within 4 hours (vs. their typical 2-week indexation lag).
- Days 4–5: Server performance audit. Their LCP was averaging 3.8s due to an oversized hero image on article pages. Optimized image format (switched to WebP), reduced file size by 68%, brought LCP to 1.9s.
- Days 6–7: Verified GPTBot visits in server logs 6 days after the robots.txt fix. Confirmed PerplexityBot visits began on Day 5. Technical foundation confirmed active.
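The Day 1 robots.txt verification can be scripted rather than eyeballed. A minimal sketch using Python's standard-library `urllib.robotparser` — the robots.txt content, crawler list, and URL below are illustrative, not the client's actual values:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content after the Day 1 fix; in practice you
# would fetch https://<host>/robots.txt and check the live file.
ROBOTS_TXT = """\
User-agent: *
Disallow: /wp-admin/
"""

# AI crawler user agents worth checking (not exhaustive).
AI_AGENTS = ["GPTBot", "PerplexityBot", "ClaudeBot", "Bingbot"]

def check_ai_access(robots_txt: str, url: str) -> dict:
    """Return {user_agent: allowed} for each AI crawler against one URL."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {agent: parser.can_fetch(agent, url) for agent in AI_AGENTS}

if __name__ == "__main__":
    for agent, ok in check_ai_access(ROBOTS_TXT, "https://example.com/blog/article").items():
        print(f"{agent}: {'allowed' if ok else 'BLOCKED'}")
```

Running a check like this per user agent catches the failure mode mentioned above: a wildcard or agent-specific rule silently blocking a crawler you care about.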
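For sites not on WordPress, the Day 3 IndexNow step is a single JSON POST to the IndexNow API endpoint. A sketch of the payload construction (the host, key, and URL are placeholders — a real key is a file you generate and host at your site root per the protocol):

```python
import json

HOST = "example.com"
KEY = "a1b2c3d4e5f6"  # hypothetical key; yours is self-generated

def build_indexnow_payload(host: str, key: str, urls: list[str]) -> str:
    """Build the JSON body for a POST to https://api.indexnow.org/indexnow."""
    return json.dumps({
        "host": host,
        "key": key,
        # Key file hosted at the site root proves ownership of the host.
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": urls,
    })

# The actual submission is an HTTP POST of this body with
# Content-Type: application/json; shown here as payload construction only.
payload = build_indexnow_payload(HOST, KEY, ["https://example.com/blog/new-article"])
```

One submission notifies all IndexNow-participating engines (including Bing), which is what collapsed the client's indexation lag from weeks to hours.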
Week 2: Content and Schema
With technical access confirmed, we focused on their highest-value content piece: a 1,200-word article titled "What Are Inventory Control Frameworks?" — the exact topic ChatGPT was citing competitors for.
- Days 8–9: Rewrote the article to 2,800 words. Added a TL;DR block, restructured all sections to answer-first format, embedded specific data points throughout (average inventory turnover rates by retail category, specific framework adoption rates from industry surveys), and added a comparison table of four major inventory frameworks.
- Day 10: Implemented Article schema with full author details. Created individual author pages for their two primary content writers — named, with photos, LinkedIn links, and Person schema.
- Day 11: Added FAQPage schema to the rewritten article with 6 Q&A pairs, each covering a distinct angle on inventory frameworks. Validated all schema through Rich Results Test — zero errors.
- Days 12–14: Published two supporting cluster articles (1,400 words each) targeting narrower subtopics: "FIFO vs LIFO Inventory Methods" and "How to Calculate Reorder Points." Interlinked all three with the pillar article. Added Article and FAQPage schema to both.
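The Day 10 Article-plus-author markup can be generated programmatically so every post stays consistent. A hedged sketch — the author name, URLs, and dates below are invented placeholders, not the client's real writers:

```python
import json

def article_schema(headline: str, author_name: str, author_url: str,
                   date_published: str, date_modified: str) -> dict:
    """JSON-LD Article markup with a named Person author, per schema.org."""
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {
            "@type": "Person",
            "name": author_name,
            "url": author_url,  # links to the dedicated author page
        },
        "datePublished": date_published,
        "dateModified": date_modified,
    }

schema = article_schema(
    "What Are Inventory Control Frameworks?",
    "Jane Doe",  # hypothetical author
    "https://example.com/authors/jane-doe",
    "2026-02-09",
    "2026-02-09",
)
# Embedded in the page as <script type="application/ld+json">...</script>
json_ld = json.dumps(schema, indent=2)
```

Pointing `author.url` at a real author page (with its own Person schema) is what replaces the anonymous "Marketing Team" byline with a verifiable entity.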
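The Day 11 FAQPage markup follows a fixed shape — a list of Question entities, each with one acceptedAnswer — so it's easy to generate from plain Q&A pairs. A minimal sketch (the sample questions are paraphrased illustrations, not the article's actual six):

```python
import json

def faqpage_schema(qa_pairs: list[tuple[str, str]]) -> dict:
    """Build JSON-LD FAQPage markup from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in qa_pairs
        ],
    }

faq = faqpage_schema([
    ("What is an inventory control framework?",
     "A structured method for deciding what to stock, when to reorder, and how much."),
    ("How do you calculate a reorder point?",
     "Average daily usage multiplied by lead time in days, plus safety stock."),
])
faq_json = json.dumps(faq, indent=2)
```

Generating the block from the same Q&A text that appears on the page keeps the markup and the visible content in sync, which validators (and crawlers) expect.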
Week 3: FAQ Expansion and Entity Building
By Week 3, server logs confirmed GPTBot was visiting the rewritten pillar article. We expanded the FAQ strategy and worked on entity recognition.
- Days 15–17: Added FAQ sections to their next 8 highest-traffic articles — not rewriting them, just appending 4–5 Q&A pairs with FAQPage schema. Targeted questions we'd seen their competitors being cited for in ChatGPT.
- Day 18: Claimed and completed their Crunchbase profile. Updated LinkedIn Company page with consistent company description, founding year, and website URL matching their Organization schema. Added sameAs links to both profiles in their Organization schema.
- Days 19–21: Published an original data piece: "2026 SMB Inventory Management Benchmarks" — a 2,200-word report based on a 150-response survey they'd conducted internally but never published. Original data is a high-value citation target. Added DataCatalog and Dataset schema to the page.
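The Day 18 entity work hinges on one detail: the sameAs URLs in the Organization schema must exactly match the external profiles you claimed. A sketch of that markup plus a trivial consistency check — the company name and profile URLs are placeholders:

```python
# Organization JSON-LD with sameAs links to claimed external profiles.
# All names and URLs below are illustrative, not the client's.
ORG_SCHEMA = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Inventory Co",
    "url": "https://example.com",
    "foundingDate": "2018",
    "sameAs": [
        "https://www.crunchbase.com/organization/example-inventory-co",
        "https://www.linkedin.com/company/example-inventory-co",
    ],
}

def same_as_consistent(schema: dict, claimed_profiles: list[str]) -> bool:
    """True if every claimed external profile appears in the sameAs list."""
    return set(claimed_profiles) <= set(schema.get("sameAs", []))
```

A check like this is worth running whenever a profile URL changes — a Crunchbase or LinkedIn URL that drifts out of sync with the schema weakens the entity signal the markup exists to provide.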
Week 4: Monitoring and Gap Analysis
- Days 22–24: Systematic citation monitoring. Queried 30 target questions in ChatGPT, Perplexity, and Gemini and recorded which sources each cited. Built a spreadsheet tracking: query, cited sources, client citation (yes/no), gap notes.
- Days 25–27: Gap filling. For 12 queries where competitors were cited but the client wasn't, we identified the specific content differences. In 9 of 12 cases, the competitor had a dedicated article on that specific subtopic; the client didn't. We prioritized and published 3 of those articles in the remaining days.
- Days 28–30: Final citation audit. Updated dateModified on the rewritten pillar article to reflect the week 3 additions. Submitted updated sitemap to Bing. Ran final server log check to confirm all key AI crawlers remained active.
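The Week 4 citation-tracking spreadsheet has a simple enough shape that it can be produced directly from query results. A minimal sketch using the standard-library `csv` module, written to an in-memory buffer for illustration (the sample row is hypothetical):

```python
import csv
import io

# Columns match the tracking spreadsheet described above.
FIELDS = ["query", "cited_sources", "client_cited", "gap_notes"]

def citation_log_csv(rows: list[dict]) -> str:
    """Render citation-monitoring rows as CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

report = citation_log_csv([
    {
        "query": "best inventory control framework for small business",
        "cited_sources": "competitor-a.com; competitor-b.com",
        "client_cited": "no",
        "gap_notes": "no dedicated article on this subtopic",
    },
])
```

Keeping the log in a fixed machine-readable format makes the Day 25–27 gap analysis mechanical: filter to `client_cited == "no"` and sort by how often each gap recurs.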
30-Day Results
The pillar article became ChatGPT's primary citation for "inventory control frameworks for small business" by Day 27, confirmed by repeating the query 8 times with different phrasing variants.
Replicable Lessons
- Technical fixes first, always. The robots.txt fix on Day 1 was the prerequisite for everything else. Content improvements delivered to an unindexed page produce zero citations.
- Bing is not optional. 8 of their 10 key pages weren't in Bing. That was the primary reason ChatGPT wasn't citing them despite strong Google performance.
- FAQPage schema is the fastest win. Adding FAQ schema to existing articles (without rewriting them) produced measurable citation improvement within 3 weeks — faster than content rewrites.
- Gap analysis beats guesswork. Systematically identifying what competitors are cited for, then publishing better versions of that content, is more efficient than creating new topics speculatively.
- Original data punches above its weight. The benchmark report generated citations disproportionate to its production cost. Original data is an asymmetric citation opportunity.
