Generative AI can draft, summarise, and reason through hard cases with an interpretive sensitivity that often seems recognisably human. Yet the same systems also produce errors that, if made by a human lawyer, would look like sabotage – most notably, hallucinating legal authority by fabricating cases, quotations, or doctrinal propositions. This essay explains that duality and draws out its implications for both judicial practice and jurisprudential theory. Part I surveys emerging evidence that large language models can apply legal rules flexibly, sometimes tracking not only the letter but also the spirit of the law in ways that parallel the judgments of lay and professional survey participants. Part II examines the distinctive risks of deploying such systems as writing assistants, focusing on hallucination, verification fatigue, and the institutional dynamics that make “unforeseeable” failures more likely under workload pressure. Part III offers a conjecture: because LLMs are optimised for next-token prediction, they tend to answer legal questions as if asked what the law would most naturally say, rather than what the law actually says – producing maximally cohesive answers even when those answers are wrong. That pathology, I argue, also furnishes a potent counterexample to the poststructuralist picture of law as radically indeterminate. The essay concludes by urging a pivot away from AI-as-drafter towards tools that exploit pattern recognition without inviting fabricated authority.