Probability Evaluation

ProvSQL computes probabilities by reducing a provenance circuit to Boolean form and then dispatching to one of several evaluation methods. This page explains the dispatch architecture, gives the background on the central data structures (d-DNNF, Tseytin encoding, weighted model counting, tree decomposition), and ends with a step-by-step guide for adding a new method. See ../user/probabilities for the user-facing description of the existing methods.

Architecture

The entry point is the SQL function probability_evaluate, which calls provenance_evaluate_compiled on the C++ side. That function builds a BooleanCircuit object from the persistent circuit store and then calls probability_evaluate_internal (in probability_evaluate.cpp).

probability_evaluate_internal receives the method name as a string and dispatches via a chain of if / else if branches. There is no registration mechanism -- methods are hardcoded in the dispatcher.

Background: d-DNNF, Tseytin, Knowledge Compilation

Computing the probability that a Boolean formula evaluates to true when its variables are assigned independently is #P-hard in general, but tractable for structured representations. The two structures ProvSQL exploits are d-DNNF and tree decomposition, both of which give linear-time probability evaluation in the size of the structure. The methods that ship with ProvSQL all reduce to one or the other.

d-DNNF

A deterministic decomposable negation normal form (d-DNNF) is a Boolean circuit built from AND, OR, NOT, and variable leaves, satisfying two structural properties:

  • Decomposability: for every AND gate, the variable sets of its children are pairwise disjoint. This means the children represent independent events, and the probability of the AND is the product of the children's probabilities.
  • Determinism: for every OR gate, the children represent mutually exclusive events. This means the probability of the OR is the sum of the children's probabilities -- no inclusion-exclusion correction is needed.

Together these two properties make a single bottom-up traversal sufficient to compute the probability: dDNNF::probabilityEvaluation does exactly that. The implementation in dDNNF.cpp uses an explicit stack instead of recursion to avoid blowing the call stack on very deep circuits, and memoises intermediate results so that shared sub-circuits are evaluated only once.
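That bottom-up pass can be sketched in a few lines. The Kind / Gate types, the topological-order storage convention, and the function name below are this example's own assumptions, not ProvSQL's actual classes:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal d-DNNF sketch: gates are stored in topological order
// (children before parents), so one forward pass suffices and each
// shared sub-circuit is evaluated exactly once (the p[] vector plays
// the role of the memoisation table).
enum class Kind { Var, Not, And, Or };

struct Gate {
  Kind kind;
  double prob;                    // only meaningful for Var leaves
  std::vector<int> children;      // indices of child gates
};

// Decomposability makes AND a product; determinism makes OR a plain
// sum; NOT is complementation. The last gate is the root.
double ddnnf_probability(const std::vector<Gate> &gates) {
  std::vector<double> p(gates.size());
  for (std::size_t i = 0; i < gates.size(); ++i) {
    const Gate &g = gates[i];
    switch (g.kind) {
    case Kind::Var: p[i] = g.prob; break;
    case Kind::Not: p[i] = 1.0 - p[g.children[0]]; break;
    case Kind::And: {
      double r = 1.0;
      for (int c : g.children) r *= p[c];
      p[i] = r;
      break;
    }
    case Kind::Or: {
      double r = 0.0;
      for (int c : g.children) r += p[c];
      p[i] = r;
      break;
    }
    }
  }
  return p.back();
}
```

For example, x ∨ y written in d-DNNF form as x ∨ (¬x ∧ y) with P(x) = 0.3 and P(y) = 0.5 evaluates to 0.3 + 0.7 · 0.5 = 0.65.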

A general Boolean formula is not a d-DNNF. Producing a d-DNNF from an arbitrary formula -- knowledge compilation -- is the expensive part; once you have it, evaluation is cheap. The compilation and tree-decomposition branches of the dispatcher both end in a dDNNF object that dDNNF::probabilityEvaluation then walks.

Tseytin Encoding

External knowledge compilers (d4, c2d, dsharp, minic2d) and the weightmc model counter all consume Boolean formulas in DIMACS CNF format. Producing CNF from a ProvSQL Boolean circuit is the job of BooleanCircuit::Tseytin (in BooleanCircuit.cpp).

The Tseytin transformation introduces one fresh variable per internal gate of the circuit, then writes a small set of clauses that encode the gate's semantics. For an AND gate g = s₁ ∧ s₂ ∧ … ∧ sₙ, that's one binary clause (¬g ∨ sᵢ) for every child sᵢ, plus one big clause (g ∨ ¬s₁ ∨ ¬s₂ ∨ … ∨ ¬sₙ). OR is dual. NOT becomes two two-literal clauses. A unit clause forcing the root variable true is added at the end.

For weighted model counting (and the d4 compiler when built with weight support), Tseytin also emits one w line per leaf variable giving the probability of the corresponding ProvSQL input gate -- so the SAT-side of the pipeline knows the weights to multiply through.

The output is dumped to a temporary file under /tmp; BooleanCircuit::compilation then invokes the chosen compiler with that file and reads the result back.
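As a concrete illustration, here is a minimal sketch of the clause set for a single AND gate. The function name and the clause representation (vectors of DIMACS-style signed literals) are this example's own, not BooleanCircuit::Tseytin's:

```cpp
#include <cassert>
#include <vector>

// Tseytin clauses for g = s1 ∧ s2 ∧ … ∧ sn. Literals are signed
// DIMACS variable numbers chosen by the caller; -v means ¬v.
std::vector<std::vector<int>> tseytin_and(int g, const std::vector<int> &s) {
  std::vector<std::vector<int>> clauses;
  std::vector<int> big{g};              // will become (g ∨ ¬s1 ∨ … ∨ ¬sn)
  for (int si : s) {
    clauses.push_back({-g, si});        // (¬g ∨ s_i): g true forces each child
    big.push_back(-si);
  }
  clauses.push_back(big);               // all children true forces g
  return clauses;
}
```

For an AND gate over n children this yields n binary clauses plus one clause of length n + 1, matching the description above.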

Knowledge Compilers and the NNF Format

All four supported external compilers (d4, c2d, dsharp, minic2d) produce a d-DNNF in the NNF text format -- a line-oriented representation where each line is one node:

  • L lit -- a leaf literal (positive or negative).
  • A k c1 c2 ... -- an AND of k children, given by their node indices.
  • O k c1 c2 ... -- an OR of k children.

Modern d4 also emits a few extra node kinds (t / f for the true / false constants, plus a / o node lines in its decision-DNNF dialect); the parser in BooleanCircuit::compilation handles both the legacy and the d4-extended dialects. The result is a BooleanCircuit (with the d-DNNF invariants) that gets returned to the caller and walked by dDNNF::probabilityEvaluation.
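A toy evaluator for the legacy dialect described above makes the format concrete. Unlike ProvSQL's real parser, which builds a BooleanCircuit and handles the d4 extensions, this sketch computes the probability directly while parsing:

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <sstream>
#include <string>
#include <vector>

// Evaluates the probability of an NNF text consisting of the three
// legacy node kinds ("L lit", "A k c1…ck", "O k c1…ck") against
// per-variable probabilities; probs[v] = P(variable v true),
// variables are 1-based, node indices 0-based, last node = root.
double nnf_probability(const std::string &nnf,
                       const std::vector<double> &probs) {
  std::istringstream in(nnf);
  std::vector<double> p;          // probability of each parsed node
  std::string tag;
  while (in >> tag) {
    if (tag == "L") {             // leaf literal, possibly negated
      int lit; in >> lit;
      double q = probs[std::abs(lit)];
      p.push_back(lit > 0 ? q : 1.0 - q);
    } else if (tag == "A" || tag == "O") {
      std::size_t k; in >> k;
      double r = (tag == "A") ? 1.0 : 0.0;  // neutral element
      for (std::size_t i = 0; i < k; ++i) {
        std::size_t c; in >> c;
        if (tag == "A") r *= p[c]; else r += p[c];
      }
      p.push_back(r);
    }
  }
  return p.back();
}
```

On the d-DNNF for x ∨ y from the earlier section (x ∨ (¬x ∧ y), P(x) = 0.3, P(y) = 0.5), the five-node NNF evaluates to 0.65.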

WeightMC: Approximate Weighted Model Counting

BooleanCircuit::WeightMC uses a different external tool: the WeightMC approximate weighted model counter. Unlike a knowledge compiler it does not produce a d-DNNF; instead it returns a single approximate probability with statistical guarantees. The user passes the desired precision as a "delta;epsilon" string, which is parsed and turned into a --pivotAC argument that controls how many random hash constraints WeightMC will sample.

The input is again the Tseytin CNF (with weights); the output is a single line that the function parses and returns as a double.

Tree Decomposition

The tree-decomposition path is ProvSQL's "no external tool" route to a d-DNNF. Conceptually, a tree decomposition of a Boolean circuit is a tree of bags (sets of variables) such that every constraint of the circuit is captured by at least one bag, and the bags containing each variable form a connected subtree. The treewidth is one less than the size of the largest bag; the smaller it is, the more amenable the formula is to dynamic programming.

TreeDecomposition.cpp builds a tree decomposition of the circuit's primal graph using a min-fill elimination heuristic, then normalises it (TreeDecomposition::makeFriendly) so that every bag has at most two children and every leaf bag introduces exactly one variable. dDNNFTreeDecompositionBuilder.cpp then walks the bag tree bottom-up, enumerating per-bag truth assignments and gluing them into a d-DNNF whose decomposability and determinism follow from the bag-cover structure of the decomposition. The worst-case cost is O(2^(w+1) · |circuit|) for treewidth w, which is why ProvSQL caps the treewidth at TreeDecomposition::MAX_TREEWIDTH (currently 10) and falls back to compilation with d4 when that bound is exceeded.
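The min-fill idea itself is easy to sketch: repeatedly eliminate the vertex whose neighbourhood needs the fewest "fill-in" edges to become a clique. The code below is illustrative (adjacency-set graph, quadratic scan), not the actual TreeDecomposition implementation:

```cpp
#include <cassert>
#include <set>
#include <vector>

// Min-fill elimination order on an undirected graph given as
// adjacency sets. Eliminating a vertex turns its neighbourhood into
// a clique; the heuristic always picks the vertex that requires the
// fewest new edges for that.
std::vector<int> min_fill_order(std::vector<std::set<int>> adj) {
  const int n = static_cast<int>(adj.size());
  std::vector<bool> gone(n, false);
  std::vector<int> order;
  for (int step = 0; step < n; ++step) {
    int best = -1, best_fill = -1;
    for (int v = 0; v < n; ++v) {
      if (gone[v]) continue;
      int fill = 0;                       // missing edges among neighbours
      for (int a : adj[v])
        for (int b : adj[v])
          if (a < b && !adj[a].count(b)) ++fill;
      if (best < 0 || fill < best_fill) { best = v; best_fill = fill; }
    }
    for (int a : adj[best])               // make the neighbourhood a clique
      for (int b : adj[best])
        if (a != b) adj[a].insert(b);
    for (int a : adj[best]) adj[a].erase(best);
    gone[best] = true;
    adj[best].clear();
    order.push_back(best);
  }
  return order;
}
```

On a path 0–1–2–3 the endpoints never need fill-in, so the heuristic peels the path from one end, which corresponds to the optimal treewidth-1 decomposition.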

Currently Supported Methods

Method string Implementation
"independent" BooleanCircuit::independentEvaluation -- exact, linear time when every input gate appears at most once.
"possible-worlds" BooleanCircuit::possibleWorlds -- exact enumeration of all 2ⁿ worlds over n inputs; capped at 64 inputs.
"monte-carlo" BooleanCircuit::monteCarlo -- approximate via random sampling; takes sample count as argument.
"weightmc" BooleanCircuit::WeightMC -- approximate weighted model counting via the external weightmc tool; takes delta;epsilon as argument.
"tree-decomposition" Builds a TreeDecomposition (bounded by TreeDecomposition::MAX_TREEWIDTH) and uses dDNNFTreeDecompositionBuilder to construct a d-DNNF, then calls dDNNF::probabilityEvaluation.
"compilation" BooleanCircuit::compilation -- invokes an external knowledge compiler (d4, c2d, dsharp, or minic2d) to produce a dDNNF, then dDNNF::probabilityEvaluation.
"" (default) Fallback chain: try independent, then tree-decomposition, then compilation with d4.

The branches for "compilation", "tree-decomposition", and the default all funnel through BooleanCircuit::makeDD, which dispatches further on the d-DNNF construction strategy.

The external-compiler choice inside compilation is itself a string dispatch inside BooleanCircuit::compilation. Once a dDNNF has been produced, probability evaluation is a single linear-time pass (dDNNF::probabilityEvaluation), because the d-DNNF structure guarantees decomposability and determinism.
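The "monte-carlo" row of the table above needs no structural assumptions on the circuit, which makes it the easiest method to sketch. The function below is an illustration of the technique, not BooleanCircuit::monteCarlo; it represents the circuit as an arbitrary Boolean function of a sampled world:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <functional>
#include <random>
#include <vector>

// Monte-Carlo estimation: draw each input true independently with
// its probability, evaluate the circuit on the sampled world, and
// average over many samples. Works on arbitrary circuits; the error
// shrinks as O(1/sqrt(samples)).
double monte_carlo(const std::vector<double> &probs,
                   const std::function<bool(const std::vector<bool> &)> &circuit,
                   std::size_t samples, unsigned seed = 42) {
  std::mt19937 gen(seed);                 // fully specified 32-bit generator
  std::size_t hits = 0;
  std::vector<bool> world(probs.size());
  for (std::size_t s = 0; s < samples; ++s) {
    for (std::size_t i = 0; i < probs.size(); ++i)
      world[i] = gen() * (1.0 / 4294967296.0) < probs[i];  // uniform in [0,1)
    if (circuit(world)) ++hits;
  }
  return static_cast<double>(hits) / samples;
}
```

With 200 000 samples the estimate for x ∨ y with P(x) = 0.3, P(y) = 0.5 lands close to the exact value 0.65.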

Block-Independent Databases and Multivalued Inputs

The default path through add_provenance and the per-row INSERT trigger allocates one fresh input gate per tuple, so each row of a provenance-tracked base table is an independent Bernoulli variable. That is the tuple-independent probabilistic database (TID) model.

ProvSQL additionally supports the strictly more general block-independent database (BID) model, in which input tuples are partitioned into blocks:

  • tuples within a block are pairwise disjoint -- at most one of them is present in any possible world;
  • blocks are independent;
  • each tuple of a block has its own probability, with the per-block sum ≤ 1; the residual 1 − Σᵢ pᵢ is the probability that no tuple from the block is present (the "null outcome").

A TID is the special case where each block has exactly one tuple. BIDs are the natural circuit-level model for tables with key uncertainty: "exactly one of these rows is the real row, we don't know which, and here are the weights".
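The semantics of a BID possible world can be made concrete with a short sketch (illustrative representation, not a ProvSQL API): each block independently picks one of its alternatives or the null outcome, and the world's probability is the product of those choices.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <numeric>
#include <vector>

// blocks[i] lists the probabilities of block i's alternatives
// (summing to at most 1). choice[i] is the index of the alternative
// picked in block i, or -1 for the null outcome. Blocks are
// independent, so the world probability is a simple product.
double world_probability(const std::vector<std::vector<double>> &blocks,
                         const std::vector<int> &choice) {
  double p = 1.0;
  for (std::size_t i = 0; i < blocks.size(); ++i) {
    double sum = std::accumulate(blocks[i].begin(), blocks[i].end(), 0.0);
    p *= (choice[i] < 0) ? 1.0 - sum : blocks[i][choice[i]];
  }
  return p;
}
```

For two blocks {0.5, 0.3} and {0.8}, the world that picks the first alternative of block 0 and nothing from block 1 has probability 0.5 · (1 − 0.8) = 0.1.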

The gate_mulinput Gate

ProvSQL represents each BID block in the persistent circuit by a group of gate_mulinput gates that share a common child, an input gate acting as the block key. Each mulinput gate corresponds to one alternative of the block and carries its own probability (set with set_prob). mulinput gates are not first-class leaves of the provenance DAG: semiring evaluators do not know how to interpret them and will refuse any circuit that contains one, and the probability pipeline handles them only after rewriting the blocks into standard Boolean gates -- as described below.

The canonical way to create such gates from SQL is repair_key, which takes a table with duplicate key values, allocates one fresh input gate per key group, and turns each member of the group into a mulinput whose child is that block key. When no probabilities are attached, repair_key defaults them to a uniform distribution over the block members.

Rewriting Blocks into Independent Booleans

Most probability-evaluation algorithms require a purely Boolean circuit: AND, OR, NOT, and independent Bernoulli leaves. A BID block is not directly such a structure -- its elements are mutually exclusive, not independent. BooleanCircuit::rewriteMultivaluedGates (in BooleanCircuit.cpp) reduces every block to an equivalent Boolean subcircuit by introducing O(log n) fresh independent Bernoulli variables per block of size n whose joint distribution reproduces the original discrete weights.

The construction is divide-and-conquer. Given a block whose alternatives carry cumulative probabilities P[0] ≤ P[1] ≤ ⋯ ≤ P[n−1], the recursive helper BooleanCircuit::rewriteMultivaluedGatesRec splits the range [start, end] at the midpoint mid, creates one fresh input gate g with probability

(P[mid+1] − P[start]) / (P[end] − P[start])

-- the conditional probability of landing in the left half -- and recurses twice: the left half gets g pushed onto its prefix, the right half gets NOT g. At a leaf (start = end), the mulinput gate is rewritten into the AND of the accumulated prefix, so its truth value becomes the conjunction of the fresh-variable decisions that lead to it. If the block's probabilities do not sum to 1, the outer call wraps the whole construction in one more fresh input of probability P[n−1] to carry the "none of them" residual.
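The recursion can be sketched as follows. This is an illustration of the idea, not the actual rewriteMultivaluedGatesRec code: it uses half-open ranges [start, end) and exclusive prefix sums P[i] = p₀ + … + p(i−1) (a slight reindexing of the description above), and instead of building gates it records, for each alternative, the product of the fresh-variable probabilities along its decision path -- which by construction must equal the original weight.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// P: exclusive prefix sums of the block weights (P[0] = 0, P[i] =
// p_0 + … + p_{i-1}). [start, end) is the half-open range of
// alternatives being encoded; acc is the probability of the prefix
// of fresh-variable decisions taken so far. out[i] receives the
// probability of the AND gate rewriting alternative i.
void rewrite_block(const std::vector<double> &P,
                   std::size_t start, std::size_t end,
                   double acc, std::vector<double> &out) {
  if (end - start == 1) { out[start] = acc; return; }  // leaf: one alternative
  std::size_t mid = (start + end) / 2;
  // Fresh Bernoulli variable g: conditional probability of the left half.
  double g = (P[mid] - P[start]) / (P[end] - P[start]);
  rewrite_block(P, start, mid, acc * g, out);          // left: prefix ∧ g
  rewrite_block(P, mid, end, acc * (1.0 - g), out);    // right: prefix ∧ ¬g
}
```

For weights {0.1, 0.2, 0.3, 0.4} the three fresh variables get probabilities 0.3, 1/3, and 3/7, and the four path products reproduce the original weights exactly.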

After rewriting, the block's mulinput gates have been turned into ordinary AND gates over fresh independent Boolean inputs, and the circuit is ready for any TID-based probability method. The dispatcher in probability_evaluate_internal calls BooleanCircuit::rewriteMultivaluedGates lazily: the "independent" method handles mulinput gates natively and runs on the raw circuit; every other method falls through to the rewrite first. This is the pivot point referenced in adding-new-probability-method below.

Shapley and Banzhaf Values

ProvSQL also exposes expected Shapley values and expected Banzhaf values, which quantify the individual contribution of each input tuple to the truth of a provenance circuit. The user-facing interface is described in ../user/shapley; this section covers the implementation in shapley.cpp and dDNNF.cpp.

Expected Shapley values are #P-hard in general but become polynomial-time computable when the provenance is represented as a decomposable and deterministic (d-D) Boolean circuit -- essentially a d-DNNF. The algorithm ProvSQL uses is Algorithm 1 of Karmakar, Monet, Senellart, and Bressan (PODS 2024, DBLP:journals/pacmmod/KarmakarMSB24), specialised to the two coefficient functions that define the Shapley and Banzhaf scores. Both scores are computed in expectation: the random subset of variables is drawn according to the per-variable probabilities of the circuit, and when no probabilities have been set, each defaults to 1 and the computation collapses to the standard deterministic Shapley / Banzhaf value.

Entry Point

shapley / banzhaf (and their set-returning variants shapley_all_vars / banzhaf_all_vars) are thin wrappers that unpack their arguments and call shapley_internal in shapley.cpp. That helper performs the following sequence:

  1. Build a BooleanCircuit from the persistent store via getBooleanCircuit.
  2. Build a dDNNF by calling BooleanCircuit::makeDD. This is the same d-DNNF construction used for ordinary probability evaluation, and obeys the same method / args conventions.
  3. dDNNF::makeSmooth -- ensure that every OR gate's children mention the same variable set. The paper's algorithm assumes a smooth d-DNNF.
  4. For Shapley (but not Banzhaf): dDNNF::makeGatesBinary on AND -- binarise AND gates so each has at most two children. Together, the previous two steps turn the d-DNNF into a tight d-D circuit in the paper's sense.
  5. Call dDNNF::shapley or dDNNF::banzhaf on the target variable's gate.

The Shapley Recurrence

The paper's algorithm conditions the circuit on the target variable being fixed to true (call the result C1) and to false (call it C0), computes a pair of per-gate arrays on each conditioned circuit, and combines them into the final score. ProvSQL's dDNNF::shapley mirrors that structure:

double dDNNF::shapley(gate_t var) const {
  auto cond_pos = condition(var, true);   // C_1
  auto cond_neg = condition(var, false);  // C_0

  auto alpha_pos = cond_pos.shapley_alpha();
  auto alpha_neg = cond_neg.shapley_alpha();

  double result = 0.;
  for (size_t k = ...; k < alpha_pos.size(); ++k)
    for (size_t l = 0; l <= k; ++l) {
      double pos = alpha_pos[k][l];
      double neg = alpha_neg[k][l];
      result += (pos - neg) / comb(k, l) / (k + 1);
    }
  result *= getProb(var);
  return result;
}

dDNNF::condition returns a copy of the circuit in which the target input gate has been replaced by an AND / OR-with-no-children acting as the constant true / false respectively. The private helper dDNNF::shapley_alpha() then performs a single bottom-up pass computing a two-dimensional array β_g[k, ℓ] (called result[g] in the code) at every gate g, where k is the number of variables under g in the current cofactor and ℓ is the number of them that are positively assigned. The recurrences follow the IN / NOT / OR / AND cases of Algorithm 1 of the paper:

  • At a leaf, the array encodes the Bernoulli distribution of that single variable.
  • At an OR gate, the arrays of the children are summed coordinatewise (valid because the d-DNNF is smooth, so all children have the same variable set).
  • At a binary AND gate, the arrays are convolved via a double sum over (k₁, ℓ₁) pairs -- the decomposability of AND makes this the Cauchy product of two independent distributions. This convolution is the reason AND gates have to be binarised before the algorithm runs.
  • A standalone bottom-up pass (dDNNF::shapley_delta()) precomputes the δ_g[k] polynomials, which the algorithm uses at NOT gates to turn negation into a coefficient flip.

The final score is p_x · Σ_{k,ℓ} c_Shapley(k+1, ℓ) · (β_gout[k, ℓ] − γ_gout[k, ℓ]), where β_gout comes from C₁, γ_gout from C₀, and c_Shapley(k+1, ℓ) = 1 / (C(k, ℓ) · (k+1)) is the Shapley coefficient, with C(k, ℓ) the binomial coefficient (comb(k, l) in the code) -- i.e. exactly the formula implemented above. The overall complexity is O(|C| · |V|⁵) arithmetic operations, dominated by the double-sum convolution at AND gates over the |V|²-sized arrays.

The if (isProbabilistic()) guards inside shapley_alpha and shapley_delta short-circuit the polynomials to a single top-level coefficient when all input probabilities are 1, so that the same code path computes classical (deterministic) Shapley values without paying the expected-score overhead.

Banzhaf

The expected Banzhaf value admits a much simpler formula (DBLP:journals/pacmmod/KarmakarMSB24):

EScore_Banzhaf(φ, x) = p_x · (ENV(C₁) − ENV(C₀))

where ENV(φ) = Σ_{Z ⊆ V} Π_V(Z) · Σ_{E ⊆ Z} φ(E) can be computed in a single linear pass over a smooth d-D circuit without binarising AND gates. dDNNF::banzhaf runs banzhaf_internal() on the two conditioned circuits C₁ and C₀ and returns the difference times p_x; the overall complexity is O(|C| · |V|), one factor of |V| less than Shapley. This is why shapley_internal skips the dDNNF::makeGatesBinary call in the Banzhaf branch.
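For intuition (and for cross-checking small circuits), both scores can be computed by brute force directly from their deterministic definitions. The sketch below is exponential in the number of variables and is this page's own illustration -- it is not part of ProvSQL -- with variables identified with bit positions of a subset mask:

```cpp
#include <bitset>
#include <cassert>
#include <cmath>
#include <functional>
#include <vector>

// Deterministic Shapley value of variable x for a Boolean function
// phi over n variables, from the defining formula:
//   Σ_{S ⊆ V\{x}} |S|! (n−|S|−1)! / n! · (phi(S∪{x}) − phi(S)).
double shapley_bruteforce(int n, int x,
                          const std::function<bool(unsigned)> &phi) {
  std::vector<double> fact(n + 1, 1.0);
  for (int i = 1; i <= n; ++i) fact[i] = fact[i - 1] * i;
  double total = 0.0;
  for (unsigned S = 0; S < (1u << n); ++S) {
    if (S & (1u << x)) continue;          // S ranges over subsets without x
    int s = static_cast<int>(std::bitset<32>(S).count());
    double w = fact[s] * fact[n - s - 1] / fact[n];
    total += w * ((phi(S | (1u << x)) ? 1 : 0) - (phi(S) ? 1 : 0));
  }
  return total;
}

// Deterministic Banzhaf value: the average marginal contribution of
// x over all 2^(n-1) subsets not containing x.
double banzhaf_bruteforce(int n, int x,
                          const std::function<bool(unsigned)> &phi) {
  double total = 0.0;
  for (unsigned S = 0; S < (1u << n); ++S) {
    if (S & (1u << x)) continue;
    total += (phi(S | (1u << x)) ? 1 : 0) - (phi(S) ? 1 : 0);
  }
  return total / (1u << (n - 1));
}
```

For φ = x ∧ (y ∨ z), the Shapley values are 2/3 for x and 1/6 each for y and z (summing to φ(V) − φ(∅) = 1, as the efficiency property requires), while the Banzhaf value of x is 3/4.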

Step-by-Step: Adding a New Probability Evaluation Method

The work is almost entirely in two files. Pick a short, descriptive method string -- it is the value the user passes to probability_evaluate.

  1. Declare the method on BooleanCircuit.h:

    double myMethod(gate_t g, const std::string &args) const;
    
  2. Implement it in BooleanCircuit.cpp. The method receives the root gate and the user-supplied args string (may be empty) and must return a probability in [0, 1]. Check provsql_interrupted periodically if the computation is long so that the user can cancel with Ctrl-C:

    double BooleanCircuit::myMethod(gate_t g, const std::string &args) const {
      // Parse args if needed.
      // Run the algorithm, respecting provsql_interrupted.
      // Return the probability.
    }
    
  3. Add a dispatch branch in probability_evaluate_internal in probability_evaluate.cpp. The exact location depends on the method's characteristics:

    • If the algorithm requires a Boolean circuit (no multivalued inputs -- see bids-and-multivalued-inputs), add the branch after the call to BooleanCircuit::rewriteMultivaluedGates. That is the case for most approximate methods.
    • If the algorithm operates directly on the raw circuit (like independent), add it before BooleanCircuit::rewriteMultivaluedGates.
    • If the algorithm produces a d-DNNF and you want it to benefit from the linear-time d-DNNF evaluator, add it to BooleanCircuit::makeDD instead and route your dispatch branch through BooleanCircuit::makeDD.

    Example for an approximate method that takes a numeric argument:

    } else if(method == "mymethod") {
      try { std::stoi(args); }  // validate early; myMethod parses args itself
      catch(const std::invalid_argument &) {
        provsql_error("mymethod requires a numeric argument");
      }
      result = c.myMethod(gate, args);
    }
    
  4. (Optional) Extend the default fallback chain. If the method is a good universal choice, update BooleanCircuit::makeDD and/or the default branch in probability_evaluate_internal to try it before falling back to compilation with d4.

  5. Add a regression test under test/sql/ and register it in test/schedule.common. Follow the skip-if-missing pattern from the other external-tool tests (see testing) if the new method depends on an external binary.

  6. Update the user documentation in ../user/probabilities and add a row for the new method to the "Currently supported methods" table above.