A note on a class of practitioners who stay smaller than they should.
A working draft. The problem feels real to me after watching it for two years. I want to know whether it lands the same way for you — and where you'd push back.
- The pattern, described abstractly enough to apply across several professions.
- The two paths most practitioners attempt, and the structural reason each fails.
- A canonical case from two years of conversation.
- A product hypothesis: three principles, not five features.
- What's been validated, what hasn't, and what we'd need from collaborators.
1 · The pattern
There is a class of practitioners whose expertise is the entire reason a customer would choose them, and whose practices nevertheless stay smaller than they should be. They are not bad at their work — many are unusually good. They are not lazy or technophobic. In most cases they have, at some point, tried.
What they tend to share:
- The work itself is the product. The customer is not paying for a brand. They are paying for the practitioner's specific judgement on a hard case.
- The expertise is hard to compress. The decisions that make them good can take years to explain to a colleague — and seconds to misunderstand in the hands of someone who isn't one.
- They have a real practice. Clinical hours, a courtroom calendar, a cellar, a lab. Not a content channel.
The customers who would most value such a practitioner often do not know they exist. Word of mouth carries them part of the way. It does not scale beyond a circle. So at some point, almost all of them try to fix it.
2 · The two paths most try, and why each fails
There are essentially two options, and each fails for a structural reason of its own.
Path 1 — DIY tools
Taplio, OpusClip, Castmagic, dozens of others. The practitioner uses AI assistance to write or edit content themselves. The tools work mechanically: posts get drafted, clips get cut, schedules fill. The trouble is structural, not technical. These tools assume the practitioner already knows what to make, and that the act of making it is something they enjoy. Both assumptions tend to be wrong. The practitioner knows what their work is, not what content is, and the act of making content sits adjacent to their actual job rather than inside it. After three weeks they stop, and the subscription becomes a graveyard.
The clean way to put it: DIY doesn't sustain.
Path 2 — Agencies and freelance social-media specialists
Outsource the whole thing. Pay $500–$2,000 per month. The agency staff is fluent in marketing but rarely fluent in the practitioner's specialty. Microimplant cases get reduced to before-and-afters. International-tax expertise becomes a thread of generic LinkedIn aphorisms. Lab work becomes a moodboard. The practitioner sees the output and recognises that whatever was specific about their work has been drained out of it. They cancel.
The clean way to put it: Agencies don't translate.
These are not the same problem. Most practitioners try both, in some order, and stop trying after the second one breaks.
3 · A canonical case
A version of this has played out for a friend over the past two years. He is one of a handful of orthodontists in Tashkent specialising in clear aligners and microimplants. He runs his own lab. He is the kind of practitioner who other doctors send hard cases to. Almost no one outside that network knows him.
He has tried the first path and couldn't sustain it: if a process isn't enjoyable, he won't keep showing up for it, and this one wasn't.
He has tried the second. The specialists he hired couldn't translate his work. What made him him got flattened on the way to the post.
The two of us also attempted a version of a tool together once before — outside the scope of this document, and before this document existed. It didn't ship. I was employed full-time, and the work needed more focus than I could give it. The constraints are different now.
The conversation has repeated, with small variations, for close to two years. The problem hasn't moved. The two paths still fail in the same two ways. He still has a lab and a clinical depth that few in the city can match. And the patients who would choose him for that depth still don't know him.
This is one person's experience. The validation step (below) tests whether it generalises.
4 · The product hypothesis
A third path. Not a tool that assumes the practitioner is a marketer, and not an agency that doesn't understand the practitioner's specialty. A platform organised around three principles, not five features.
- Knows the work. The platform learns the practitioner's specialty over time, not generically. For an orthodontist running aligners and microimplants it asks about specific case types, specific clinical decisions, the lab side, the rare-skill side. The intake is technical, not lifestyle. (This is what the agencies fail at.)
- Translates without dumbing down. Turns technical work into content that's interesting without becoming "lifestyle medicine". The kind of post a serious patient reads and thinks, "this is the practitioner I want."
- Sustainable by design. The process is enjoyable enough that the practitioner keeps doing it after week three. Voice notes between patients, not two-hour shoots. AI does the editing — the practitioner stays in their expertise zone. (This is what the tools fail at.)
Get any one of the three wrong and the product breaks the same way the first two paths break.
What it might look like
5 · What we've validated, and what we haven't
What the two years of conversation make defensible
- One specialist has tried both paths and abandoned both, with reasons that are structural rather than personal.
- The horizontal tools currently on the market (Taplio, OpusClip, Castmagic, others) all sit on the DIY side of the diagram. None claim to "know your work".
- Through this exchange we've been introduced to dentist contacts in Berlin who could be the first cross-validation cohort outside the CIS market.
What hasn't been validated yet
- Whether N>1 specialists describe the same bind in similar words. (One person doesn't make a market. The validation plan is to find out.)
- Willingness to pay at the imagined price (somewhere around $300–500 per month, comparable to a part-time SMM specialist).
- Anything about a working product. There is no MVP yet.
6 · Why now, and why this team
- AI tooling makes the "domain-aware translator" newly possible. A year ago, doing the listening + planning + editing layer at scale required humans, and humans were the source of the failure. Today, the right composition of LLMs and video models can carry the translation layer.
- The vertical-specialist space is uncrowded. Existing tools target horizontal audiences — creators, marketers, founders. None speak to the technically deep practitioner.
- Open-source posture matches the buyer. Specialists value transparency, control over their data, and auditability of how AI represents their case work and their patients' likenesses.
- The design partner is committed. Two years of conversation. He's willing to be the first user, the harshest critic, and the source of the domain knowledge the platform has to learn.
- Location. Tashkent gives direct access to Russian- and Uzbek-speaking specialist markets where competition for these tools is weak and saturation is low. European introductions through the collaborator network give a second front to validate against.
7 · What we'd need from collaborators
Concrete asks (not "let's chat")
- 3–5 dentist intros in Berlin, for 30-min discovery calls. No commitment expected from them. The goal is to see whether the bind we hear in Tashkent holds in DACH.
- 30-min architectural review of the open-source structure, once there's a repo to review.
- Sounding-board call once a month for the first three months. Critique and direction, not implementation.
- (Future, only if it makes sense) Help thinking through European go-to-market, once there's an MVP and a validated wedge.
This is not a co-founder ask, a funding ask, or an open-ended commitment.
Status & timeline
If the data doesn't hold at any of those checkpoints, the project ends. The discipline matters more than the project.