Is there consensus on what 'semidet' means in Prolog?

I’m pretty new to Prolog and found an old discussion from 2012 about different ways people define ‘semidet’. There seem to be two main definitions floating around:

  1. A predicate that succeeds at most once (one solution or outright failure)
  2. A predicate that succeeds at most once and leaves no choice point behind

The second definition is stricter than the first one. From what I read, Dr. Neumerkel preferred the first meaning, while Dr. Wielemaker and Dr. O’Keefe went with the second.
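To make the gap between the two concrete, here is a tiny made-up predicate (q/1 is my own example, not from the discussion) that is semidet under definition 1 but not under definition 2:

```prolog
% q/1 has at most one solution, so it is semidet under definition 1.
q(X) :- X = a.
q(X) :- X = b, fail.

% But after the first answer, a choice point into the second clause
% is still pending, so under definition 2 it is not semidet:
%
% ?- q(X).
% X = a ;     % backtracking retries the second clause...
% false.      % ...which fails
```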

I also saw some database folks use ‘semi-deterministic’ to mean queries returning at most one result group, which is closer to definition number one.

Back then, Dr. Neumerkel mentioned that the actual meaning needed to be decided before they could optimize the call_semidet predicate properly.
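For reference, call_semidet/1 is meant to call a goal while enforcing that it has at most one solution. A naive sketch of the idea (my own, not the optimized version being discussed in that thread) runs the goal twice, which is exactly the cost such an optimization would try to avoid:

```prolog
% Naive call_semidet/1 sketch: fail if Goal has no solution, throw if
% it has more than one, otherwise succeed with Goal's bindings.
% findall/3 copies terms, so Goal must be re-run to obtain bindings;
% hence the goal executes twice.
call_semidet(Goal) :-
    findall(x, Goal, Solutions),
    (   Solutions == []
    ->  fail
    ;   Solutions = [_]
    ->  once(Goal)                     % re-run, leaving no choice point
    ;   throw(error(mode_error(semidet, Goal), call_semidet/1))
    ).
```

(This sketch also loops forever if Goal has infinitely many solutions; it is only meant to pin down the intended meaning.)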

What about ‘det’ predicates?

This makes me wonder about deterministic predicates too. Maybe there are two ways to think about ‘det’:

  1. Always succeeds exactly once
  2. Succeeds exactly once and leaves no choice point behind

The first reading is more declarative, but in general you can’t verify it without running the computation to completion. The second is easy to check once a solution is found, but it is tied to Prolog’s depth-first search strategy.
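Here’s an illustration of the difference (my own example): max_of/3 below always succeeds exactly once on ground numbers, so it is det in the first sense, yet the first clause’s success still leaves a pending choice point, so it is not det in the second sense:

```prolog
% Exactly one solution for ground numeric arguments, but the first
% clause's success leaves a choice point into the second clause,
% which only fails when backtracked into.
max_of(X, Y, X) :- X >= Y.
max_of(X, Y, Y) :- Y >  X.
```

Adding a cut to the first clause (or rewriting with if-then-else) would make it det in the stricter sense as well.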

Did the Prolog community ever agree on standard definitions? Why not use different names for these concepts if they’re really different things?

Yeah, the terminology’s pretty fragmented across Prolog implementations. I’ve worked with SWI-Prolog and SICStus, and most people use the stricter definitions you mentioned: semidet means succeeding at most once without leaving choice points, and det means succeeding exactly once without leaving choice points. That fits better with how modern Prolog systems optimize and compile code.

Choice points matter a lot for performance. When the compiler knows a predicate won’t leave choice points, it can apply last-call (tail call) optimization and avoid retaining unnecessary stack frames. That’s why systems like Mercury built precise determinism declarations in from day one.
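As a concrete, textbook-style illustration: in the accumulator version of list length below, first-argument indexing distinguishes [] from [_|_], so no choice point is created at the recursive clause and the recursion can run in constant stack space:

```prolog
% Tail-recursive list length with an accumulator. Since [] and [_|_]
% are distinguished by first-argument indexing, no choice point is
% created at the recursive clause, and last-call optimization lets
% the recursion reuse the current stack frame.
list_len(List, N) :-
    len_(List, 0, N).

len_([], N, N).
len_([_|T], Acc0, N) :-
    Acc is Acc0 + 1,
    len_(T, Acc, N).
```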

There’s no formal standard here - ISO Prolog doesn’t even define these terms. Most modern textbooks and docs use the stricter interpretation because it’s more useful for reasoning and automated optimization. The confusion mostly shows up in older books and legacy code.

I’ve taught Prolog for years, and this ambiguity trips up students and developers constantly. Most Prolog systems just gave up on logical definitions and went with operational ones instead. Check any implementation’s docs: they define semidet and det based on what actually happens at runtime, such as whether choice points remain after the first solution.

Logical definitions look nice on paper, but you often can’t tell which category a predicate falls into without running the whole computation. I’ve watched programmers annotate their predicates with intended determinism, then get blindsided when the runtime behavior doesn’t match because of implementation quirks or edge cases they missed.

What really gets me is that different Prolog dialects use identical terms but implement different semantics, which makes writing portable code a nightmare. We need clearer terminology, but operational definitions are too baked into the major systems at this point.
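In SWI-Prolog, for example, you can observe the operational notion directly: the built-in deterministic/1 reports whether a choice point survived the most recent solution. A small helper (my own sketch, SWI-specific) makes this visible:

```prolog
% SWI-Prolog specific sketch: call Goal and report whether it
% succeeded deterministically, using the built-in deterministic/1.
report_det(Goal) :-
    call(Goal),
    deterministic(Det),   % true iff no choice point remains
    format("~q succeeded, deterministic: ~w~n", [Goal, Det]).
```

For instance, ?- report_det(member(X, [a, b])). reports false (a choice point remains), while a goal like X is 1 + 1 reports true. The same “det-looking” goal can come out differently across systems, which is exactly the portability problem.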

Honestly, it’s a mess because no one standardized this early on. I’ve seen codebases where people mix both definitions in the same project, then wonder why their optimizations break. SWI-Prolog’s manual hints at the stricter meaning but doesn’t spell it out clearly. Mercury got it right by making determinism explicit from the start, but Prolog’s been around too long to fix this now.
