One Too Many Late Nights on Prospect Research. “So I Built This”

Funny thing is… it actually turned out way simpler than I thought it would be.

It was around 9 PM that day — I think it was a Tuesday, not 100% sure.

Discovery call at 9 AM the next morning. Company I’d never heard of before last week. I had five browser tabs open: their website, a funding announcement, a LinkedIn search for their data team, an analyst piece from 2023 that may or may not still be relevant, and a job posting that hopefully told me something about their tech stack.

An hour later I had three bullet points and a vague sense of unease.

I’ve been on both sides of enterprise tech sales. The commercial side, where you’re the one walking into calls and trying to ask smart questions. And the technical side, where you’re the one building things that solve problems. Sitting there at 9 PM, I thought — this is such a solvable problem. Why is everyone still doing this manually?

That question turned into a weekend project. And that weekend project is now something I actually use almost daily.

The Problem Is Bigger Than One Bad Night

Here’s a number that surprised me when I actually calculated it.

A solutions engineer running three discovery calls a week spends roughly 80 minutes on research before each one. That’s 240 minutes a week. Over a year, that’s roughly 190 hours, more than four full working weeks, spent on research that is largely repetitive, mostly manual, and produces wildly inconsistent output depending on how tired you are and how many tabs you’re willing to open.

Manual research per call:   ~80 minutes
Calls per week:              3
Working weeks per year:      48
──────────────────────────────────────────
Time lost to research:      ~190 hours/year
That's 4+ working weeks. Every year.
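Nothing fancy behind those numbers. The arithmetic is just this, assuming a 40-hour working week:

```python
# Back-of-the-envelope math for time spent on manual prospect research.
MINUTES_PER_CALL = 80   # manual research before each discovery call
CALLS_PER_WEEK = 3
WEEKS_PER_YEAR = 48     # working weeks

minutes_per_week = MINUTES_PER_CALL * CALLS_PER_WEEK        # 240 minutes
hours_per_year = minutes_per_week * WEEKS_PER_YEAR / 60     # 192.0 hours
working_weeks_lost = hours_per_year / 40                    # 4.8 weeks

print(f"{minutes_per_week} min/week -> {hours_per_year:.0f} h/year "
      f"= {working_weeks_lost:.1f} working weeks")
```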

And the time cost is only half the story. The quality problem is quietly worse. What you find depends on which search terms you use, how deep you go, whether you think to check job postings for stack signals. Two engineers researching the same company will come out with different pictures. There’s no baseline, no consistency.

I wanted something that gives everyone the same floor. A brief that takes 60 seconds to generate and covers what you actually need.

What I Built

I called it the Prospect Intelligence Agent.

You type a company name. The AI does the research — public sources, company pages, news, hiring signals, competitive landscape. It comes back with a structured brief. Seven sections. Ready to read in five minutes before your call.

[Diagram: logical flow in the app]

[Screenshot 1 from the app: brief output, 7 sections (sample output)]

But honestly, the brief is only half of what I built.

Every brief you generate gets automatically parsed — tools, pain points, stakeholders, competitors, use cases — and saved into a knowledge graph. So after you’ve researched ten accounts, you can ask things like:

“Which of my prospects use Kafka?”
“What’s the most common pain point across my fintech accounts?”
“Which competitors keep showing up?”

That’s not possible when your research lives in a notes doc. It becomes possible when your territory is a connected database.
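To make that concrete, here is a toy sketch of those territory queries in Python. The real app answers them from Neo4j; this stand-in uses a plain list of hypothetical parsed briefs, so the field names and sample companies are illustrative, not the app's actual schema or data.

```python
from collections import Counter

# Hypothetical parsed-brief records. In the real app these live in Neo4j;
# a plain list of dicts stands in here so the query pattern is visible.
briefs = [
    {"company": "FinCo",   "industry": "fintech",
     "tools": ["Kafka", "Snowflake"], "pain_points": ["fragmented pipelines"]},
    {"company": "PayCorp", "industry": "fintech",
     "tools": ["Kafka"], "pain_points": ["fragmented pipelines", "real-time latency"]},
    {"company": "RetailX", "industry": "retail",
     "tools": ["Postgres"], "pain_points": ["real-time latency"]},
]

def prospects_using(tool):
    """Which of my prospects use <tool>?"""
    return [b["company"] for b in briefs if tool in b["tools"]]

def top_pain_point(industry):
    """Most common pain point across accounts in <industry>."""
    counts = Counter(p for b in briefs if b["industry"] == industry
                     for p in b["pain_points"])
    return counts.most_common(1)[0][0]

print(prospects_using("Kafka"))      # ['FinCo', 'PayCorp']
print(top_pain_point("fintech"))     # fragmented pipelines
```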

The Part That Actually Changed How I Think About Territory

I want to spend a moment on the graph piece because I think it gets underestimated.

When you read briefs one at a time, you see accounts one at a time. You know one company has fragmented pipelines. You know another has real-time latency issues. You might not notice that four of your ten fintech accounts share the same pain point — and that’s a pattern you could build a whole strategy around.

[Screenshot 2 from the app: the Neo4j graph database]

The graph makes this visible. A node with four edges coming in means four accounts share that thing. Suddenly you can see your territory rather than just read about it one account at a time.

This kind of insight usually comes from months of account reviews and spreadsheet work. Here it’s a byproduct of generating briefs.

How It Works — The Short Version

The whole stack is free except the AI calls, which cost about five cents per brief.

Python for the code, Streamlit for the web interface, Claude as the AI engine, and Neo4j Aura Free for the graph database. Three tabs in the app — brief generator, a chat interface to query your territory, and the graph visualization.

The interesting design choice was separating brief generation from entity extraction. The first AI call produces the human-readable brief. The second parses it and returns clean structured data — company, tools, pain points, use cases — which gets written into Neo4j as nodes and relationships.
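As a rough sketch of that last step, here is one way the structured output could become idempotent Cypher MERGE statements. The node labels (Company, Tool, PainPoint) and relationship types (USES, HAS_PAIN_POINT) are my guesses for illustration, not the app's actual schema.

```python
def to_cypher(entities):
    """Turn one parsed brief into Cypher MERGE statements.
    Labels and relationship types are illustrative, not the app's schema.
    A real implementation should pass values as query parameters
    instead of interpolating strings."""
    company = entities["company"]
    stmts = []
    for tool in entities.get("tools", []):
        stmts.append(
            f"MERGE (c:Company {{name: '{company}'}}) "
            f"MERGE (t:Tool {{name: '{tool}'}}) "
            f"MERGE (c)-[:USES]->(t)"
        )
    for pain in entities.get("pain_points", []):
        stmts.append(
            f"MERGE (c:Company {{name: '{company}'}}) "
            f"MERGE (p:PainPoint {{name: '{pain}'}}) "
            f"MERGE (c)-[:HAS_PAIN_POINT]->(p)"
        )
    return stmts

statements = to_cypher({"company": "FinCo", "tools": ["Kafka"],
                        "pain_points": ["fragmented pipelines"]})
for s in statements:
    print(s)
```

MERGE (rather than CREATE) is what makes re-running a brief safe: an account researched twice matches its existing nodes instead of duplicating them.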

Why two separate calls? Because asking one prompt to write well and extract structured data produces mediocre results at both. Separating them means each can be optimised for one job.
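Here is a minimal sketch of that separation, with the model call stubbed out since it needs an API key. The prompt wording and JSON keys are assumptions for illustration; the point is that the second call is asked for nothing but strict JSON, which a validator can then check.

```python
import json

# Call 1 asks for prose; call 2 asks for nothing but JSON.
BRIEF_PROMPT = "Write a concise prospect brief on {company}."
EXTRACT_PROMPT = (
    "From the brief below, return ONLY a JSON object with keys "
    '"company", "tools", "pain_points", "use_cases".\n\nBrief:\n{brief}'
)

def parse_extraction(raw):
    """Validate the second call's output; fail loudly on anything off-schema."""
    data = json.loads(raw)
    required = {"company", "tools", "pain_points", "use_cases"}
    missing = required - data.keys()
    if missing:
        raise ValueError(f"extraction missing keys: {missing}")
    return data

# Stubbed model response (a real run would send EXTRACT_PROMPT to Claude):
raw = ('{"company": "FinCo", "tools": ["Kafka"], '
       '"pain_points": ["fragmented pipelines"], "use_cases": ["fraud detection"]}')
entities = parse_extraction(raw)
print(entities["tools"])   # ['Kafka']
```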

Full technical walkthrough, architecture docs and code:
github.com/rahulsahay123/rahulsahay_repo → Other-Technologies/Prospect-Intelligence

[Screenshot 3 from the app: sample output]

What This Isn’t

This is not a replacement for knowing your accounts. A brief can tell you a company uses a certain tool and has a data pipeline problem. It cannot tell you that the VP of Engineering had a bad experience with a vendor two years ago and is skeptical.

What it does is remove the low-value prep work so you have more time for the high-value thinking. The 75 minutes saved on research is 75 minutes you can spend on strategy, better questions, or just being less stressed the night before a call.

Who Should Care About This

If you run discovery calls regularly — this is worth five minutes to try.

If you manage a team doing discovery calls and want more consistency in how they prepare — this is worth understanding.

If you’re curious what an AI agent actually looks like in a real business workflow — not a demo, not a toy — this is a working example of that pattern.

The Thought I Keep Coming Back To

This problem existed for years before anyone built something for it.

Not because the technology wasn’t there. Not because the problem wasn’t obvious. But because the people who understood the problem couldn’t build the solution, and the people who could build didn’t really feel the problem.

That gap is still everywhere in enterprise software. The people best positioned to close it have genuinely lived on both sides — and can see what’s missing in the middle.

This is my attempt to build something in that space.

Appendix — How this is put together (high level)

If you’re curious how this actually works under the hood, here’s a simple view.

It’s not a heavy system. Just a few pieces working together:

  • a UI layer to trigger research
  • an AI layer to generate the brief
  • a second pass to extract structured entities
  • and a graph database to store relationships
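Wired together, that pipeline is small enough to fit in one function. This is a sketch of the shape, not the app's actual code: the model call and graph write are injected as plain callables (and stubbed here) so the flow runs without an API key or a database.

```python
import json

def research(company, call_model, write_graph):
    """End-to-end pipeline: the UI calls this with one company name."""
    brief = call_model(f"Write a prospect brief on {company}.")     # AI pass 1: prose
    entities = json.loads(                                           # AI pass 2: structure
        call_model(f"Extract JSON entities from this brief:\n{brief}"))
    write_graph(entities)                                            # persist relationships
    return brief, entities

def fake_model(prompt):
    # Stub standing in for the real Claude call.
    if prompt.startswith("Extract"):
        return '{"company": "FinCo", "tools": ["Kafka"]}'
    return "Stub brief about FinCo."

stored = []
brief, entities = research("FinCo", fake_model, stored.append)
print(entities["company"])   # FinCo
```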

Tools Used and Cost Summary

Tentative numbers:

Tool              Role                Cost
Python            application code    Free
Streamlit         web interface       Free
Claude (API)      AI engine           ~$0.05 per brief
Neo4j Aura Free   graph database      Free

What Makes This “Agentic”

A standard AI interaction looks like this:

User question → AI generates text from training data → Response

An agentic workflow looks like this:

User goal → AI plans steps → Uses tools → Evaluates results →
Synthesises across sources → Produces structured deliverable
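As a toy illustration of that second shape, here is the loop in miniature, with the tools stubbed out: each "tool" is a fake lookup, and the evaluation step just discards empty results. Purely illustrative; a real agent's planning and tools are far more involved.

```python
def agent_run(goal, tools, synthesize):
    """Toy agentic loop: plan -> use tools -> evaluate -> synthesize."""
    results = {}
    for name, tool in tools.items():   # "plan": here, simply run every tool
        out = tool(goal)
        if out:                        # "evaluate": keep only useful results
            results[name] = out
    return synthesize(goal, results)   # structured deliverable, not raw text

# Stub tools standing in for real research steps:
tools = {
    "web_search": lambda goal: f"recent news about {goal}",
    "job_posts":  lambda goal: f"hiring signals for {goal}",
}
report = agent_run("FinCo", tools,
                   lambda g, r: {"company": g, "sources": sorted(r)})
print(report)   # {'company': 'FinCo', 'sources': ['job_posts', 'web_search']}
```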

Final Thoughts

This started as a small fix for something I kept running into before every discovery call, and it somehow turned into something I now use almost daily.

Pre-sales feels like it’s at a bit of a shift right now. A lot of the repetitive work is slowly being taken off our plates, and AI is changing how we prepare for and go into these conversations. It’s still early, though, with a lot to figure out.

This is also my first time writing something focused on pre-sales, so I’m still experimenting here.

I’ve been working around AI and modern data platforms, with Snowflake at the centre of most of it. Learning as I go.

The app is live, but since it runs on API credits, access is limited for now. If you want to try it out, just drop me a note and I can enable access.

And if you’re building something similar or just thinking along these lines, feel free to DM me on LinkedIn; I’m always up for exchanging ideas.
https://www.linkedin.com/in/rahul-sahay-8573923/

One Too Many Late Nights on Prospect Research. “So I Built This” was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.
