Chatbot Personas & Ethics — responsible AI literacy tools

Responsible, human-centered chatbots for teaching, services, and civic problem-solving—built with safeguards by default.

What are Chatbot Personas?

RIL’s Chatbot Personas project creates safe, purposeful assistants for classrooms, campuses, nonprofits, and civic teams. Each persona is a use-case bundle: clear goals, audience, voice & tone, allowed/blocked behaviors, data use rules, and evaluation prompts—so deployments are fast and responsible.

  • Aligned to learning or service outcomes (not novelty)
  • Privacy-aware by default (no PII collection for basic use)
  • Instructional guardrails, refusal behaviors, and escalation paths
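As an illustration, a persona "use-case bundle" like the one described above could be sketched as a small data structure. This is a hypothetical sketch, not RIL's actual schema; the field names and the Study Coach values below are drawn from the examples on this page.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """One persona bundle: goal, audience, voice, behavior rules, data rules."""
    name: str
    goal: str
    audience: str
    voice: str
    allowed: list = field(default_factory=list)   # in-scope behaviors
    blocked: list = field(default_factory=list)   # refused behaviors
    data_rules: dict = field(default_factory=dict)
    escalation: str = ""                          # where humans take over

# Example bundle, mirroring the Study Coach persona described below
study_coach = Persona(
    name="Study Coach",
    goal="metacognition & planning",
    audience="students",
    voice="encouraging, reflective",
    allowed=["task structuring", "study sprints", "reflection prompts"],
    blocked=["full assignment completion", "citation fabrication"],
    data_rules={"pii": "none collected for basic use"},
    escalation="faculty resources, writing centers",
)
```

Bundling goals, guardrails, and data rules into one object is what makes deployments fast: a new persona is a new bundle, not a new system.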

Persona Library (Examples)

Study Coach (AI Literacy)

Guides students to structure tasks, plan study sprints, and reflect—without writing work for them or enabling cheating.

  • Goal: metacognition & planning
  • Blocks: full assignment completion, citation fabrication
  • Escalation: faculty resources, writing centers

Resource Concierge (Student Services)

Helps students discover food, housing, and benefits—pairs perfectly with Lurch for real-time local resources.

  • Goal: fast, discreet resource navigation
  • Data: no name/SSN; location optional; clear privacy notice
  • Escalation: campus help desks & community orgs

Civic Brief Maker (Civic Tech)

Produces accessible summaries and talking points from public materials, with bias checks and source transparency.

  • Goal: informed participation & access
  • Requires: linkable public sources; no private data
  • Safety: neutrality prompts + citation requirements

Custom Chatbots: GMBMB (Bias Lens)

Give Me Back My Bias (GMBMB) is our product engine for building bias-aware custom chatbots. Instead of hiding chatbot personalities, we design each persona with adjustable “ABC sliders” — Attitude, Bias, and Creativity — so users can tune the experience to fit their needs. Our take? Making bias visible and adjustable is where the magic happens. It’s both a demo of what responsible AI can look like and a working toolkit that powers our projects, from classroom assistants to activist voices.

  • Goal: build practical bias awareness and better prompts
  • Guards: refusal patterns for sensitive topics, citation requirements
  • Data: no PII required; clear privacy notice and opt-out
  • Escalation: instructor or team lead for complex judgments
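To make the “ABC sliders” idea concrete, here is a minimal sketch of how slider values might map to plain-language prompt modifiers. The thresholds, wording, and function name are illustrative assumptions, not the production GMBMB engine.

```python
def abc_prompt(attitude: float, bias: float, creativity: float) -> str:
    """Map ABC slider values in [0.0, 1.0] to prompt-style modifiers.

    Hypothetical sketch: real slider semantics would be richer than a
    single threshold per dimension.
    """
    for name, value in [("attitude", attitude), ("bias", bias),
                        ("creativity", creativity)]:
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} slider must be in [0, 1]")

    tone = "warm and upbeat" if attitude > 0.5 else "neutral and matter-of-fact"
    # Higher "bias" means the persona states a labeled viewpoint openly,
    # rather than pretending to be viewpoint-free.
    stance = ("state your perspective openly and label it as one viewpoint"
              if bias > 0.5 else "stick to balanced, multi-source framing")
    style = ("offer imaginative analogies" if creativity > 0.5
             else "keep answers literal and concise")
    return f"Be {tone}; {stance}; {style}."
```

The key design choice the sliders demonstrate: bias is a declared, user-visible parameter rather than an invisible default.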

Midlife College (Chatbots in Action)


Midlife College — Critical Thinking & AI Certificate

Midlife College is our working demo of persona-driven learning: short video lessons plus interactive chatbot reflections guided by distinct voices — Lucy (critical-thinking hacks), Dave (calm, approachable learning), and Jack (mentor who pushes deeper questions). It shows how tuned personas can make AI education engaging, ethical, and human.

Safeguards & Ethics (Built-In)

Personas ship with configurable guardrails so teams can deploy quickly without compromising trust. Our Safeguards and INNOVATE framework steps are threaded through design, testing, and iteration.

  • Refusal & redirection patterns for unsafe or off-scope asks
  • Privacy notices, minimal data collection, and retention choices
  • Bias checks, source transparency, and accessible language defaults
  • Human escalation paths and simple disable/feedback controls

How We Build Personas

We co-design with stakeholders, prototype in days, and evaluate with real users. Then we improve and redeploy—our DID mantra: Deploy → Improve → Deploy again.

Co-Design

Define goals, users, success criteria, and constraints with your team.

Prototype

Ship a working persona in 3–5 days—voice, guardrails, prompts, and flows.

Evaluate

Measure outcomes, run bias/robustness checks, and collect user feedback.

25+
teaching & service personas
100+
students trained on AI literacy
3–5 days
to a usable pilot

FAQ

Can we adapt an existing persona?

Yes. We’ll align goals, voice/tone, guardrails, and data rules to your context, then pilot with your users.

Do personas replace human support?

No. They augment staff and educators by handling repeatable tasks and triage, with clear escalation to humans.

What about privacy and safety?

We avoid collecting PII for basic use, provide clear notices, and include refusal/redirect behaviors for risky requests.

Co-Design a Persona with RIL

Pick a use case, define outcomes, and launch a responsible assistant your community can trust.