● System Architecture
SARAL v1.3 (Pilot System)
A decision-support probe built to measure how human discretion and algorithmic overrides operate in street-level welfare triage.
v1 (Legacy)
Early prototype. Concept only; not field-tested.
v1.3 (Current Pilot)
Field-deployed in January 2026. This version generated all N=260 telemetry records and is the basis for all reported findings.
What it does
- Intake: Accepts structured attributes (e.g., age, income, document presence).
- Rule Engine: Evaluates inputs deterministically against formal policy criteria.
- Output: Provides an interpretable recommendation (Eligible, Ineligible, Escalate).
- Visibility: Displays a "rule-trace" showing why a rule fired, alongside conflict cues.
- Experiment: Surfaced an optional, non-binding AI risk score in specific test arms.
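The flow above can be sketched as a small deterministic rule engine that emits both a recommendation and a rule-trace. This is an illustrative assumption, not SARAL's actual implementation: the field names, thresholds, and rule labels below are hypothetical, and the conflict-cue logic is a simplified stand-in.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Recommendation:
    label: str                                  # "Eligible" | "Ineligible" | "Escalate"
    trace: List[str] = field(default_factory=list)  # rule-trace: which rules fired and why

# Hypothetical policy rules: (name, predicate over intake attributes, outcome).
# Real criteria would come from the formal policy document, not hard-coded here.
RULES: List[Tuple[str, Callable[[dict], bool], str]] = [
    ("missing_documents",    lambda a: not a.get("documents_present"), "Escalate"),
    ("income_above_cutoff",  lambda a: a.get("income", 0) > 25_000,    "Ineligible"),
    ("income_within_cutoff", lambda a: a.get("income", 0) <= 25_000,   "Eligible"),
]

def triage(attrs: dict) -> Recommendation:
    """Evaluate structured intake attributes deterministically against all rules."""
    fired = [(name, outcome) for name, pred, outcome in RULES if pred(attrs)]
    trace = [f"rule '{name}' fired -> {outcome}" for name, outcome in fired]
    if not fired:
        return Recommendation("Escalate", ["no rule matched"])
    if len({outcome for _, outcome in fired}) > 1:
        # Conflict cue: rules disagree, so surface the conflict for human review.
        return Recommendation("Escalate", trace + ["conflict: rules disagree"])
    return Recommendation(fired[0][1], trace)
```

Because evaluation is deterministic and the trace lists every fired rule, an operator can see exactly why a recommendation was produced before exercising their own judgment.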
What it does NOT do
- No Auto-Approval: It does not make automated final decisions.
- Not a Service: It is not a live, public-facing production system.
- No Outcomes: It does not track long-term poverty alleviation or ultimate welfare disbursement.
Where it sits in the chain
SARAL sits between raw data collection and final human review, functioning purely as advisory software that exposes the gap between formal rules and practical gatekeeping.
Data & Logging
Captured: Triage latency, final operator actions (Approve/Reject/Escalate), and structured override reason codes.
Excluded: Personally identifiable information (PII) is removed. Free-text notes are heavily sanitized or excluded to protect citizen privacy.
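A telemetry record along the lines described above might look like the following sketch. The field names, reason codes, and the crude redaction rule are assumptions for illustration; SARAL's actual logging schema and sanitization pipeline are not reproduced here.

```python
import re
from dataclasses import dataclass
from typing import Optional

# Hypothetical structured override reason codes (not SARAL's real code list).
REASON_CODES = {"R01_DOC_MISMATCH", "R02_LOCAL_KNOWLEDGE", "R03_OTHER"}

@dataclass
class TriageEvent:
    case_id: str                         # pseudonymous identifier, never PII
    latency_ms: int                      # triage latency
    operator_action: str                 # "Approve" | "Reject" | "Escalate"
    override_reason: Optional[str] = None  # structured code when the operator overrides

    def __post_init__(self) -> None:
        if self.override_reason is not None and self.override_reason not in REASON_CODES:
            raise ValueError(f"unknown override reason: {self.override_reason}")

def sanitize_note(note: str) -> str:
    """Crude stand-in for note sanitization: strip digit runs (IDs, phone numbers)."""
    return re.sub(r"\d+", "[REDACTED]", note)
```

Validating reason codes at record-creation time keeps override telemetry analyzable as categorical data rather than free text, which is what makes the override analysis possible without retaining sensitive notes.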
How to read the results
- Schema gap: The system only knows what is in its fields; decisive reality is often hidden in unstructured notes.
- Contextual Gatekeeping: Operators enforce unwritten socio-economic heuristics (e.g., visual wealth proxies) rather than formal rules alone.
- Non-binding: The operator always has the final say.