
AI privacy

What the model sees, what it does not, and how to opt out per feature.

The assistant runs on Anthropic's Claude API. Privacy is enforced in two layers: data never leaves your tenant boundary, and PII is redacted before any prompt is sent to the model.

Per-restaurant context isolation

Each restaurant lives in its own logical tenant. The assistant only sees data from the restaurant the user is currently in — cross-restaurant queries are rejected at the data-access layer, never delegated to the model. Even if a prompt asks about another restaurant by name, the assistant cannot reach it.
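A minimal sketch of what "rejected at the data-access layer" means in practice. All names here (`RequestContext`, `fetch_reservations`, the query helper) are illustrative, not the actual DineOS implementation:

```python
from dataclasses import dataclass


@dataclass
class RequestContext:
    user_id: str
    restaurant_id: str  # the tenant the user is currently in


class TenantIsolationError(Exception):
    """Raised when a query targets a restaurant outside the caller's tenant."""


def query_reservations(restaurant_id: str) -> list:
    return []  # stand-in for the real tenant-scoped query


def fetch_reservations(ctx: RequestContext, restaurant_id: str) -> list:
    # The isolation check runs before any query executes. The model is
    # never asked to enforce it, so a prompt naming another restaurant
    # has nothing to reach.
    if restaurant_id != ctx.restaurant_id:
        raise TenantIsolationError(
            "cross-restaurant access rejected at the data-access layer"
        )
    return query_reservations(restaurant_id)
```

The point of the design is that isolation is a property of the code path, not of the prompt: even a maliciously crafted question cannot widen the query scope.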

PII redaction

Guest names, contact details, and free-form notes are redacted server-side before the prompt is constructed. The assistant operates on stable internal IDs (such as guest_xyz) and re-hydrates the human-readable fields only when the answer is rendered back to you.
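The redact-then-rehydrate round trip can be sketched as follows. This is an illustrative simplification (real redaction would also cover contact details and use more robust matching), with hypothetical function names:

```python
def build_prompt(note: str, guests: dict[str, str]) -> tuple[str, dict[str, str]]:
    """Replace guest names with stable internal IDs before the prompt is built.

    `guests` maps internal ID -> human-readable name, e.g. {"guest_xyz": "Ada"}.
    Returns the redacted text plus the mapping needed to re-hydrate later.
    """
    mapping: dict[str, str] = {}
    redacted = note
    for guest_id, name in guests.items():
        if name in redacted:
            redacted = redacted.replace(name, guest_id)
            mapping[guest_id] = name
    return redacted, mapping


def rehydrate(answer: str, mapping: dict[str, str]) -> str:
    """Swap internal IDs back to names when rendering the answer to the user."""
    for guest_id, name in mapping.items():
        answer = answer.replace(guest_id, name)
    return answer
```

Only the redacted text crosses the API boundary; the ID-to-name mapping never leaves the server.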

Opt-in for personalised features

Some features genuinely need PII to do their job — for example, drafting a personalised reply to a complaint that names the guest. These features are individually opt-in from Settings → AI → Privacy. Each toggle has a clear description of what data is shared and with which model.
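A sketch of how per-feature gating like this typically works: PII is shared only when the specific toggle is on, and anything unrecognised defaults to redacted. The toggle name below is hypothetical:

```python
# Default privacy settings: every PII-dependent feature starts disabled
# and must be switched on individually (toggle names are illustrative).
DEFAULT_PRIVACY_SETTINGS = {
    "personalised_replies": False,
}


def pii_allowed(feature: str, settings: dict[str, bool]) -> bool:
    # Unknown features fall back to False, so a new or misspelled
    # feature name can never leak PII by accident.
    return settings.get(feature, False)
```

Defaulting unknown keys to `False` makes the safe behaviour the fallback rather than something each caller must remember.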

No training

DineOS' agreement with Anthropic prohibits training on customer prompts or outputs. Anthropic's zero-retention API mode is enabled for all DineOS traffic.