Op-Ed  ·  7 May 2026  ·  ~7 min read

The Lemniscates of Reason

Against the cult of the specialist — and why AI has already picked the other side.

A figure 8 has two loops crossing at one point. Specialism trains you to run one loop forever. The renaissance frame the academy mocks is the only one that traces the whole curve — and AI is about to make that the only frame that scales.

The Orthodoxy We Mock the Other Tribe With

There are two priesthoods in American intellectual life, and they pretend not to be the same religion. One sits in the academy, where being recognized as the world’s expert in some narrow corner is the highest possible honor. The other sits in Silicon Valley, where the founder-engineer who can ship one perfect kind of system is treated as a kind of sage. They mock each other across the cultural divide, but they share a single doctrine: the highest form of intellectual accomplishment is to go deep in one domain, and to be quietly suspicious of everyone who does not.

The doctrine has a name. It is the cult of the specialist. The renaissance frame — the polymath who learns the structural shape of multiple fields — is treated as quaint, dilettantish, suspicious. To be widely curious is to be diluted. To be cross-disciplinary is to be unreliable. The first thing your dissertation committee will check is whether you have sufficiently narrowed.

I want to say plainly that this doctrine is wrong, that it has been wrong for a long while, and that the AI moment is about to make it look indefensible.


The Lemniscate

Bernoulli’s lemniscate is the figure-eight curve. Two loops, crossing at one point — the origin. From far away it looks like two circles. Up close it is one continuous trace that turns back on itself.
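For the curious, the figure can be made concrete. The following is a minimal sketch, using a standard parametrization of the lemniscate (x² + y²)² = a²(x² − y²); the function names `lemniscate` and `trace` and the scale parameter `a` are mine, chosen for illustration:

```python
# A sketch of the essay's central image: Bernoulli's lemniscate,
# the figure-eight (x^2 + y^2)^2 = a^2 * (x^2 - y^2), traced as ONE
# continuous parametric curve that crosses itself at the origin.
import math

def lemniscate(t: float, a: float = 1.0) -> tuple[float, float]:
    """Point on the lemniscate at parameter t; a standard parametrization."""
    d = 1.0 + math.sin(t) ** 2
    return (a * math.cos(t) / d, a * math.sin(t) * math.cos(t) / d)

def trace(n: int = 400, a: float = 1.0) -> list[tuple[float, float]]:
    """Sample the whole figure -- both loops -- in a single parameter sweep."""
    return [lemniscate(2 * math.pi * k / n, a) for k in range(n)]

# Every sampled point satisfies the implicit equation of the curve,
# and the single sweep passes through the self-crossing at t = pi/2.
points = trace()
residual = max(abs((x * x + y * y) ** 2 - (x * x - y * y)) for x, y in points)
origin = lemniscate(math.pi / 2)
```

One parameter sweep, one curve, two loops: the crossing at the origin is visited twice, which is the whole point of the image.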

It is an almost embarrassingly direct picture of how knowledge actually works.

Each loop is a domain. Number theory is a loop. Immunology is a loop. Materials science is a loop. Climate dynamics is a loop. There are thousands of loops. Specialism trains you to run around your loop, faster and faster, knowing every micro-feature of its curvature and every named theorem along it. After a long enough time you can run with your eyes closed.

But the lemniscate has a self-crossing. The two loops meet at the origin. And it is that crossing — not the running — that is the structure. A specialist who never visits the crossing never realizes there is a second loop at all. The cross-disciplinary thinker traces the whole figure: through the first loop, across the crossing, around the second loop, back across the crossing, home. They see the figure as one curve.

The cult of the specialist treats the crossing as a place you fell off the path. Renaissance reasoning treats the crossing as the only place where the path was actually showing you what it was.


Three Years

For three years, almost continuously, I have been doing AI-assisted research and coding across roughly fifteen technical domains. Materials science, immunology, climate dynamics, neuroscience, quantum information, mathematical analysis, formal proof in Lean, signal processing, and others. There is a portfolio of patents, manuscripts, machine-checked proofs, and dynamical-system simulations that came out of those thirty-six months. The receipts are public.

I am not citing this to brag. I am citing it because the experience has taught me something the academy does not teach and Silicon Valley actively disbelieves: an AI system, even a very good one, cannot do useful cross-domain work for you unless you bring something specific to the table that has very little to do with how much you know.

That something is your ontology and epistemology of the field. The structural shape of what its objects are, and the rules of evidence for what counts as a real claim about them. It is the only thing the model cannot supply for you, because it is the only thing the training data does not contain in usable form.


What AI Actually Hands You

When you query an AI system about a domain, what you get back is the median of its training distribution. That is the honest engineering description. The training median is what most people, most of the time, in most published material, would say about that thing. It is competent. It is well-formed. It is rarely wrong about anything well-known.

And it is almost never insightful.

This is a property of the training objective, not a defect. A model trained on a vast quantity of human-written material will, by construction, output the central tendency of that material. It will hand you the consensus view of immunology in 2023. It will hand you the consensus view of materials science in 2023. It will hand you the consensus view of dynamical systems in 2023.

It will not hand you the second-loop crossing where immunology and dynamical systems are the same curve. The training data does not contain that crossing — or contains it so faintly that the median washes it out. Cross-domain links exist in the literature, but they are sparse. The model’s gradient sees them as noise.

Models are getting better, and the cross-domain links are getting denser, slowly, as the training corpus grows and as more interdisciplinary work gets published. But the rate of improvement on within-domain depth is much faster than the rate of improvement on cross-domain insight. The orthogonal direction — the lemniscate-crossing direction — is the slowest-improving capability of every frontier model I have used.

Median is not insight. It is competence at the consensus.
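The washing-out is easy to see in miniature. This is a toy illustration, not a claim about any real model: a learner that returns the most frequent statement in its corpus, standing in for "the median of the training distribution." The corpus contents and counts are invented:

```python
# Toy illustration: a "learner" that outputs the most frequent answer in its
# corpus reproduces the consensus and suppresses a sparse cross-domain link,
# however true that link may be. Corpus and counts are invented for the demo.
from collections import Counter

corpus = (
    ["immune response follows textbook kinetics"] * 97          # dense consensus
    + ["immune response is a dynamical system with attractors"] * 3  # sparse crossing
)

def median_answer(docs: list[str]) -> str:
    """Return the single most frequent statement -- the 'training median'."""
    return Counter(docs).most_common(1)[0][0]

answer = median_answer(corpus)  # the consensus wins at 97 to 3
link_share = corpus.count(
    "immune response is a dynamical system with attractors"
) / len(corpus)
```

The cross-domain statement survives in the corpus, but at three percent it never surfaces as the answer. That is the mechanism behind "the median washes it out."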

The Ontology Is the Guardrail

Here is the practical consequence.

If you arrive at an AI system without a guardrailed conception of the field you are working in — without a clear sense of what counts as a real object and what counts as evidence — the model will fill that vacuum with the training median. Its tone will be confident. Its citations will be plausible. Its conclusions will be the average of what your field has been saying for the last decade. And you will not be able to tell the slop from the signal, because you have nothing to measure either against.

If you arrive at an AI system with such a conception — if you can say "in this field the real objects are X, Y, Z; evidence for a claim about X has to look like A, B, C; everything else is rhetoric" — then the model becomes a 10x amplifier. You ask it for variations on something you already understand structurally. You ask it to find places in the literature where your conception breaks. You ask it to mechanize a tedious step you can verify. You catch the slop because you can see it as slop.

The ontology is the guardrail. Without it, you are downstream of the median. With it, you are using the model the way a skilled craftsman uses a power tool: amplifying what you already had, ten times faster.

And here is the move the cult of the specialist is going to find hardest to stomach: the guardrail is exactly what the cross-disciplinary thinker has and the pure specialist does not. A specialist has the ontology of one loop. A cross-disciplinary thinker has the ontology of multiple loops and the structural relations that link them. When AI hands them the median of any one loop, they can immediately check it against the structural relations they already know hold across loops. They have a triangulation the pure specialist does not have.


The Lemniscate Already Had Two Loops

None of this is an attack on specialists. Specialists who push to the actual frontier of their loops still produce work AI cannot replicate, and they should be honored for it. The argument is structural, not personal. It is about the cult — the social default that treats specialism as the only legitimate move and the polymath as a self-indulgence — not about the people who are doing extraordinary things on a single loop.

But the cult is going to lose its authority, and I think it is going to lose it quickly, because the cognitive style the cult worships is exactly the cognitive style AI commoditizes most easily. Encyclopedic recall in a narrow domain. Technique mastery in a narrow domain. Synthesis of consensus in a narrow domain. All of that is what large language models are best at, today, with margins still widening.

Cross-disciplinary thinking with a strong ontological anchor is exactly the style AI cannot replace and can only amplify. And the amplification factor is not small. With the right guardrail, one person doing the renaissance trace across five fields can outproduce a small institute that has divided the same five fields among five specialists who never visit the crossings.

The doctrine has it backwards. The polymath was never the diluted version of the specialist. The polymath was always the version that traces the whole figure, and the specialist was the version that ran one loop until the legs gave out.

The lemniscate already had two loops. We just stopped tracing the whole figure, because three generations of academic and corporate hiring told us the second loop was unserious. AI is going to make tracing the whole figure obvious again. It is going to make the people who can actually do it — with ontology, with epistemology, with the discipline to recognize the median when it appears — the only people whose work will not be commoditized.

If that sounds like a polemic against the cult of the specialist, written by a polymath, that is because it is. The honest thing is to admit it. Difficulty isn’t depth, and depth isn’t mystique. The grammar is simple. The figure is a lemniscate, and you only ever needed two loops.

Posted 7 May 2026. Comments: [email protected].