
Responsible, human-centered chatbots for teaching, services, and civic problem-solving—built with safeguards by default.
On this page: Overview · Persona Library · GMBMB Example · Midlife College Example · Safeguards & Ethics · How We Build · Impact · FAQ · Get Started
What are Chatbot Personas?
RIL’s Chatbot Personas project creates safe, purposeful assistants for classrooms, campuses, nonprofits, and civic teams. Each persona is a use-case bundle: clear goals, audience, voice & tone, allowed/blocked behaviors, data-use rules, and evaluation prompts, so deployments are fast and responsible. A sketch of one such bundle follows the list below.
- Aligned to learning or service outcomes (not novelty)
- Privacy-aware by default (no PII collection for basic use)
- Instructional guardrails, refusal behaviors, and escalation paths
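To make the idea concrete, here is a minimal sketch of what such a bundle might look like in TypeScript. The PersonaBundle fields and the studyCoach instance are illustrative guesses at a shape, not RIL's actual schema.

```typescript
// Hypothetical persona bundle; field names are illustrative, not RIL's schema.
interface PersonaBundle {
  name: string;
  goal: string;                // the learning or service outcome it serves
  audience: string;
  voiceAndTone: string;
  blockedBehaviors: string[];  // asks the assistant must refuse
  dataRules: {
    collectPII: boolean;       // false by default for basic use
    privacyNotice: string;
  };
  escalationPaths: string[];   // where humans take over
  evaluationPrompts: string[]; // test prompts run before and after deployment
}

// Example instance, loosely based on the Study Coach persona below.
const studyCoach: PersonaBundle = {
  name: "Study Coach",
  goal: "Build metacognition and planning skills",
  audience: "Students",
  voiceAndTone: "Encouraging and Socratic",
  blockedBehaviors: ["complete assignments", "fabricate citations"],
  dataRules: { collectPII: false, privacyNotice: "No personal data is stored." },
  escalationPaths: ["faculty resources", "writing centers"],
  evaluationPrompts: ["Ask it to write the essay; it should refuse and redirect."],
};
```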
Persona Library (Examples)
Study Coach (AI Literacy)
Guides students to structure tasks, plan study sprints, and reflect—without writing work for them or enabling cheating.
- Goal: metacognition & planning
- Blocks: full assignment completion, citation fabrication
- Escalation: faculty resources, writing centers
Resource Concierge (Student Services)
Helps students find food, housing, and benefits support; pairs perfectly with Lurch for real-time local resources.
- Goal: fast, discreet resource navigation
- Data: no name/SSN; location optional; clear privacy notice
- Escalation: campus help desks & community orgs
Civic Brief Maker (Civic Tech)
Produces accessible summaries and talking points from public materials, with bias checks and source transparency.
- Goal: informed participation & access
- Requires: linkable public sources; no private data
- Safety: neutrality prompts + citation requirements (see the prompt sketch below)
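For a flavor of what neutrality prompts and citation requirements can look like in practice, here is a hypothetical system-prompt fragment; the wording is ours, not RIL's.

```typescript
// Hypothetical system-prompt fragment; wording is illustrative, not RIL's.
const civicBriefSystemPrompt = `
You summarize public materials for civic audiences.
- Cite a linkable public source for every factual claim.
- If no public source supports a claim, say so instead of answering.
- Present opposing positions with equal weight and neutral language.
- Never request or use private or personal data.
`;
```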
Custom Chatbots: GMBMB (Bias Lens)
- Goal: build practical bias awareness and better prompts
- Guards: refusal patterns for sensitive topics, citation requirements
- Data: no PII required; clear privacy notice and opt-out
- Escalation: instructor or team lead for complex judgments
Midlife College (Chatbots in Action)
Midlife College is our working demo of persona-driven learning: short video lessons plus interactive chatbot reflections guided by distinct voices — Lucy (critical-thinking hacks), Dave (calm, approachable learning), and Jack (mentor who pushes deeper questions). It shows how tuned personas can make AI education engaging, ethical, and human.
Safeguards & Ethics (Built-In)
Personas ship with configurable guardrails so teams can deploy quickly without compromising trust. Steps from our Safeguards and INNOVATE frameworks are threaded through design, testing, and iteration; the sketch after this list shows one way these guardrails could be wired.
- Refusal & redirection patterns for unsafe or off-scope asks
- Privacy notices, minimal data collection, and retention choices
- Bias checks, source transparency, and accessible language defaults
- Human escalation paths and simple disable/feedback controls
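As an illustration of how refusal, redirection, and escalation paths can fit together, here is a minimal TypeScript sketch. The checkRequest function, its keyword matching, and its messages are hypothetical stand-ins, not RIL's implementation; a real deployment would use a proper policy classifier.

```typescript
// Illustrative guardrail logic; names and messages are hypothetical, not RIL's.
type GuardrailResult =
  | { action: "allow" }
  | { action: "refuse"; redirect: string }   // off-scope: decline and point elsewhere
  | { action: "escalate"; contact: string }; // sensitive: hand off to a human

function checkRequest(message: string, blockedTopics: string[]): GuardrailResult {
  const lower = message.toLowerCase();
  // Unsafe or off-scope asks are refused and redirected, never silently dropped.
  if (blockedTopics.some((topic) => lower.includes(topic))) {
    return {
      action: "refuse",
      redirect: "I can't help with that here, but your campus writing center can.",
    };
  }
  // Requests that need human judgment escalate rather than getting an AI answer.
  if (lower.includes("emergency") || lower.includes("crisis")) {
    return { action: "escalate", contact: "campus help desk" };
  }
  return { action: "allow" };
}
```

The keyword checks are placeholders; the point is the shape: every request ends in an allow, a refusal with a redirect, or a human hand-off.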
How We Build Personas
We co-design with stakeholders, prototype in days, and evaluate with real users. Then we improve and redeploy—our DID mantra: Deploy → Improve → Deploy again.
Prototype
Ship a working persona in 3–5 days—voice, guardrails, prompts, and flows.
FAQ
Can we adapt an existing persona?
Yes. We’ll align goals, voice/tone, guardrails, and data rules to your context, then pilot with your users.
Do personas replace human support?
No. They augment staff and educators by handling repeatable tasks and triage, with clear escalation to humans.
What about privacy and safety?
We avoid collecting PII for basic use, provide clear notices, and include refusal/redirect behaviors for risky requests.
Co-Design a Persona with RIL
Pick a use case, define outcomes, and launch a responsible assistant your community can trust.
