We Had Blackboards Once
In 1976, a team at Carnegie Mellon built one of the first systems that could recognize continuous human speech. Not isolated words. Not a small, curated vocabulary. Actual speech — messy, ambiguous, full of noise and human variation.
They called it Hearsay‑II. And the challenge it tackled has a name: an ill‑structured problem. In these problems, evidence arrives unevenly, the solution path isn't known in advance, and any architecture that assumes a clean sequence of steps is guaranteed to fail.
The blackboard model didn't fail. It got stranded. When connectionism took hold in the late 1980s and neural networks began outperforming symbolic AI on benchmark after benchmark, the entire paradigm went with it — not because the blackboard model was wrong, but because the field moved on and never looked back. We didn't abandon it because it stopped working; we abandoned it because something else worked better on different problems. That distinction matters now, because we're building systems that face the same class of problems Hearsay‑II was designed for, and we're mostly not using the approach that worked.