Factor Graph for Pose Estimation

Overview of factor graphs in pose estimation, emphasizing their benefits over Kalman Filters for handling complex dependencies. Covers dynamic Bayesian networks, information forms, and smoothing for efficient state estimation.

Kalman Filter

Markov Chain

The classic Kalman Filter corresponds to a simple Markov chain.

Core Assumptions:

  • Markov Property: the current state $x_k$ depends only on the previous state $x_{k-1}$ and the input $u_k$.
  • Conditional Independence: given the current state $x_k$, the observation $y_k$ is independent of all other states and observations.
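
Together, these two assumptions factor the joint distribution into purely local transition and observation terms (the standard Markov-chain factorization, written out in LaTeX):

$$p(x_{0:K}, y_{1:K} \mid u_{1:K}) = p(x_0) \prod_{k=1}^{K} p(x_k \mid x_{k-1}, u_k)\, p(y_k \mid x_k)$$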

Scenario 1: Spatio-Temporal Constraints

A robot revisits a location, observing the same landmark $l$ at two different times, $k_1$ and $k_2$:

$$x_{k_1} \to x_{k_1+1} \to \cdots \to x_{k_2-1} \to x_{k_2}, \qquad x_{k_1} \xrightarrow{\text{obs.}} l \xleftarrow{\text{obs.}} x_{k_2}$$

These two observations create a direct constraint between pose $x_{k_1}$ and pose $x_{k_2}$. The Kalman Filter cannot express this link: by the time $x_{k_2}$ is estimated, the past pose $x_{k_1}$ has already been marginalized away. We need a representation that keeps both poses and links them directly.

Scenario 2: Physical Constraints

When tracking multiple objects, they may be subject to physical interaction constraints; for instance, two rigid objects cannot occupy the same space at the same time.

$$\text{Object A: } x_0^A \to x_1^A \to x_2^A \ (\text{interacts with B}) \qquad \text{Object B: } x_0^B \to x_1^B \to x_2^B \ (\text{interacts with A})$$

The Kalman Filter, designed for a single Markov chain, cannot natively represent this cross-object dependency.

Factor Graph

Dynamic Bayesian Networks

Dynamic Bayesian Networks (DBNs) provide a more flexible framework than a simple Markov chain for representing probabilistic dependencies across time.

Extensions:

  1. Arbitrary Time-Span Dependencies: States can depend on states several steps back.
  2. Complex Inter-Variable Constraints: Multiple variables within a time slice can be interconnected.
  3. Hierarchical State Representations: States can be decomposed into sub-states with their own dependencies.

From DBNs to Factor Graphs

A factor graph is a bipartite graph consisting of two types of nodes:

  1. Variable Nodes: Represent the unknown quantities we wish to estimate.
  2. Factor Nodes: Represent a constraint or a measurement on the set of variables they are connected to.

Goal: Find the most probable assignment of the variables that maximizes the product of all factors.

$$X^{*} = \arg\max_{X} \prod_i f_i(X_i)$$

Under Gaussian assumptions, this becomes a nonlinear least-squares problem:

$$X^{*} = \arg\min_{X} \sum_i \left\| h_i(X_i) - z_i \right\|_{S_i}^{2}$$
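
Taking the negative logarithm of Gaussian factors $f_i(X_i) \propto \exp\!\left(-\tfrac{1}{2} \| h_i(X_i) - z_i \|_{S_i}^{2}\right)$ makes the equivalence explicit:

$$-\log \prod_i f_i(X_i) = \frac{1}{2} \sum_i \left\| h_i(X_i) - z_i \right\|_{S_i}^{2} + \text{const.}$$

where $\| e \|_{S}^{2} = e^{\top} S^{-1} e$ is the squared Mahalanobis norm and $S_i$ is the covariance of measurement $z_i$.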

Unified View

The Dual Information Form

$$p(\alpha, \beta) = \mathcal{N}\!\left( \begin{bmatrix} \mu_\alpha \\ \mu_\beta \end{bmatrix}, \begin{bmatrix} \Sigma_{\alpha\alpha} & \Sigma_{\alpha\beta} \\ \Sigma_{\beta\alpha} & \Sigma_{\beta\beta} \end{bmatrix} \right) = \mathcal{N}^{-1}\!\left( \begin{bmatrix} \eta_\alpha \\ \eta_\beta \end{bmatrix}, \begin{bmatrix} \Lambda_{\alpha\alpha} & \Lambda_{\alpha\beta} \\ \Lambda_{\beta\alpha} & \Lambda_{\beta\beta} \end{bmatrix} \right)$$

where $\Lambda = \Sigma^{-1}$ is the information matrix and $\eta = \Sigma^{-1} \mu$ is the information vector. The two parameterizations are dual: what is cheap in one form is expensive in the other.

| Operation | Covariance Form | Information Form |
| --- | --- | --- |
| Marginalization: $p(\alpha) = \int p(\alpha, \beta)\, d\beta$ | $\mu = \mu_\alpha$, $\Sigma = \Sigma_{\alpha\alpha}$ | $\eta = \eta_\alpha - \Lambda_{\alpha\beta} \Lambda_{\beta\beta}^{-1} \eta_\beta$, $\Lambda = \Lambda_{\alpha\alpha} - \Lambda_{\alpha\beta} \Lambda_{\beta\beta}^{-1} \Lambda_{\beta\alpha}$ |
| Conditioning: $p(\alpha \mid \beta) = p(\alpha, \beta) / p(\beta)$ | $\mu = \mu_\alpha + \Sigma_{\alpha\beta} \Sigma_{\beta\beta}^{-1} (\beta - \mu_\beta)$, $\Sigma = \Sigma_{\alpha\alpha} - \Sigma_{\alpha\beta} \Sigma_{\beta\beta}^{-1} \Sigma_{\beta\alpha}$ | $\eta = \eta_\alpha - \Lambda_{\alpha\beta} \beta$, $\Lambda = \Lambda_{\alpha\alpha}$ |
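
This duality is easy to verify numerically. A minimal NumPy sketch (the 2-D/1-D block split and all values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random joint Gaussian over (alpha, beta): alpha is 2-D, beta is 1-D.
A = rng.standard_normal((3, 3))
Sigma = A @ A.T + 3 * np.eye(3)      # covariance (symmetric positive definite)
mu = rng.standard_normal(3)
Lam = np.linalg.inv(Sigma)           # information matrix
eta = Lam @ mu                       # information vector

a, b = slice(0, 2), slice(2, 3)      # index blocks for alpha and beta

# Marginalization: trivial in covariance form, Schur complement in information form.
Lam_marg = Lam[a, a] - Lam[a, b] @ np.linalg.inv(Lam[b, b]) @ Lam[b, a]
eta_marg = eta[a] - Lam[a, b] @ np.linalg.inv(Lam[b, b]) @ eta[b]
assert np.allclose(np.linalg.inv(Lam_marg), Sigma[a, a])
assert np.allclose(np.linalg.solve(Lam_marg, eta_marg), mu[a])

# Conditioning: trivial in information form, Schur complement in covariance form.
beta = np.array([0.5])               # arbitrary conditioning value
eta_cond = eta[a] - Lam[a, b] @ beta
mu_cond = mu[a] + Sigma[a, b] @ np.linalg.solve(Sigma[b, b], beta - mu[b])
assert np.allclose(np.linalg.solve(Lam[a, a], eta_cond), mu_cond)
print("duality checks passed")
```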

The Dual Information Filter

| Step | Kalman Filter | Information Filter |
| --- | --- | --- |
| Prediction | $\mu_{t \mid t-1} = A_t \mu_{t-1}$, $\Sigma_{t \mid t-1} = A_t \Sigma_{t-1} A_t^{\top} + Q_t$ | $\Lambda_{t \mid t-1} = (A_t \Lambda_{t-1}^{-1} A_t^{\top} + Q_t)^{-1}$, $\eta_{t \mid t-1} = \Lambda_{t \mid t-1} A_t \Lambda_{t-1}^{-1} \eta_{t-1}$ |
| Update | $K_t = \Sigma_{t \mid t-1} H_t^{\top} (H_t \Sigma_{t \mid t-1} H_t^{\top} + R_t)^{-1}$, $\mu_t = \mu_{t \mid t-1} + K_t (z_t - H_t \mu_{t \mid t-1})$, $\Sigma_t = (I - K_t H_t) \Sigma_{t \mid t-1}$ | $\Lambda_t = \Lambda_{t \mid t-1} + H_t^{\top} R_t^{-1} H_t$, $\eta_t = \eta_{t \mid t-1} + H_t^{\top} R_t^{-1} z_t$ |
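
A minimal NumPy sketch confirming that both columns produce the same posterior after one predict/update cycle (the toy system matrices are illustrative assumptions):

```python
import numpy as np

# Toy 2-D state, 1-D measurement; all model values are arbitrary.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition A_t
Q = 0.01 * np.eye(2)                     # process noise covariance Q_t
H = np.array([[1.0, 0.0]])               # measurement model H_t
R = np.array([[0.05]])                   # measurement noise covariance R_t
z = np.array([0.7])                      # one measurement z_t

mu, Sigma = np.zeros(2), np.eye(2)       # prior, covariance form
Lam, eta = np.linalg.inv(Sigma), np.linalg.inv(Sigma) @ mu  # information form

# Kalman Filter: predict, then update via the Kalman gain.
mu_p = A @ mu
Sigma_p = A @ Sigma @ A.T + Q
K = Sigma_p @ H.T @ np.linalg.inv(H @ Sigma_p @ H.T + R)
mu_kf = mu_p + K @ (z - H @ mu_p)
Sigma_kf = (np.eye(2) - K @ H) @ Sigma_p

# Information Filter: expensive prediction, cheap additive update.
Lam_p = np.linalg.inv(A @ np.linalg.inv(Lam) @ A.T + Q)
eta_p = Lam_p @ A @ np.linalg.solve(Lam, eta)
Lam_if = Lam_p + H.T @ np.linalg.inv(R) @ H
eta_if = eta_p + H.T @ np.linalg.inv(R) @ z

# Both parameterizations describe the same posterior.
assert np.allclose(np.linalg.solve(Lam_if, eta_if), mu_kf)
assert np.allclose(np.linalg.inv(Lam_if), Sigma_kf)
print("KF and IF posteriors match")
```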

Factor Graph with Information Form

The global nonlinear optimization problem

$$X^{*} = \arg\min_{X} \sum_i \left\| h_i(X_i) - z_i \right\|_{S_i}^{2}$$

can be linearized at the current estimate $X_0$:

$$h_i(X_i) \approx h_i(X_{i,0}) + J_i\, \delta X_i$$

where:

  • $J_i = \left. \frac{\partial h_i}{\partial X_i} \right|_{X_{i,0}}$ is the Jacobian matrix of the measurement function $h_i$
  • $r_i = z_i - h_i(X_{i,0})$ is the residual vector

Factor Graph with Information Form (cont.)

| Local Information Form | Global Information Form |
| --- | --- |
| $\Lambda_i = J_i^{\top} S_i^{-1} J_i$, $\eta_i = J_i^{\top} S_i^{-1} r_i$ | $\Lambda = \sum_i A_i^{\top} \Lambda_i A_i$, $\eta = \sum_i A_i^{\top} \eta_i$ |

where $A_i$ is the selection matrix that maps the local variables $X_i$ into the global state vector $X$.

The optimal update is then:

$$X^{*} = X_0 + \Lambda^{-1} \eta$$
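
As a concrete illustration, a minimal NumPy sketch of one such step on a hypothetical 1-D pose chain (three poses, a prior, two odometry factors, and one absolute observation; all names and values are illustrative). Index arrays play the role of the selection matrices $A_i$:

```python
import numpy as np

# Toy 1-D pose chain X = [x0, x1, x2]; every factor here is linear,
# so a single Gauss-Newton step is exact.
X0 = np.array([0.0, 0.8, 1.6])           # current estimate (linearization point)

# Each factor: (indices into X, measurement function h, measurement z, covariance S).
factors = [
    (np.array([0]),    lambda x: x[0],        0.0, 0.01),  # prior on x0
    (np.array([0, 1]), lambda x: x[1] - x[0], 1.0, 0.10),  # odometry x0 -> x1
    (np.array([1, 2]), lambda x: x[1] - x[0], 1.0, 0.10),  # odometry x1 -> x2
    (np.array([2]),    lambda x: x[0],        2.2, 0.05),  # absolute obs. of x2
]

Lam = np.zeros((3, 3))                   # global information matrix
eta = np.zeros(3)                        # global information vector

for idx, h, z, S in factors:
    xi = X0[idx]
    r = z - h(xi)                        # residual r_i = z_i - h_i(X_{i,0})
    # Numerical Jacobian J_i of h_i at the linearization point.
    J = np.array([(h(xi + 1e-6 * e) - h(xi)) / 1e-6 for e in np.eye(len(idx))])
    # Scatter the local information into the global system; the index
    # set idx plays the role of the selection matrix A_i.
    Lam[np.ix_(idx, idx)] += np.outer(J, J) / S
    eta[idx] += J * (r / S)

X_star = X0 + np.linalg.solve(Lam, eta)  # X* = X0 + Lam^{-1} eta
print(X_star)
```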

From Filtering to Smoothing

The Kalman Filter, the Information Filter, and the factor graph are fundamentally solving the same problem: state estimation under Gaussian assumptions. They are probabilistically equivalent.

Despite this mathematical equivalence, factor-graph-based smoothing dominates modern applications because:

  1. It naturally encodes arbitrary constraints;
  2. It exploits sparse structure for efficient solving;
  3. Its batch updates permit re-linearization, enabling genuine nonlinear optimization;
  4. It corrects past states using future evidence.