
---
title: "AI Chatbot as Psychological Infrastructure for Hospitalized Patients: A Three-Layer Support Model and System Prompt Design"
emoji: "🏥"
type: "tech"
topics: ["AI", "Claude", "chatbot", "healthcare", "systemprompt"]
published: false
---

AI Chatbot as Psychological Infrastructure for Hospitalized Patients: A Three-Layer Support Model and System Prompt Design

1. Introduction — What the Silence of a Hospital Ward Taught Me

I was at the hospital with a family member.

I sat in the waiting room while she underwent contrast imaging. Around me were patients in shared wards — four to six beds per room, separated only by thin curtains. No one spoke. Not because they had nothing to say, but because they couldn't. Speaking above a whisper disturbs the person in the next bed. So they stare at the ceiling or scroll their phones, with nothing else to do.

An older man was struggling with the hospital Wi-Fi. His phone was configured correctly, but the captive portal required tapping an "Allow" button every time the connection dropped. He assumed the Wi-Fi didn't reach his floor. One missed tap was blocking every digital service available to him. I showed him the button. It took five seconds.

Then I showed my family member an AI chatbot (Claude) on my phone. Her reaction: "This is fun." "It's like having someone to talk to." "Can it keep me company while I'm stuck here?"

That moment crystallized something. The gap in hospital care isn't medical — it's loneliness. And loneliness has a shape that AI can fill.

1.1 Why This Matters More in English-Speaking Healthcare

If you've been hospitalized in the US, UK, or Australia, you know: the system is efficient, but it's not warm. Nurses are stretched thin. Doctors have minutes, not hours. Visiting hours are restricted — sometimes to just one or two hours per day. Insurance paperwork adds cognitive load on top of physical pain.

Shared wards (multi-bed rooms) are common in public hospitals worldwide, including the NHS, US VA hospitals, and many other systems. Even in private rooms, the core problem remains: between rounds, between visits, between meals, you are alone.

The patient who suffers most from this isn't the one who complains. It's the one who doesn't. The polite patient. The one who says "I'm fine" when the nurse checks in. The one who won't press the call button at 3 AM because they don't want to be a burden.

The quietest patients need the most support. And nobody is listening at 3 AM.

1.2 Research Question

What design principles should guide an AI chatbot that supports the psychological stability of hospitalized patients?

1.3 Contributions

  1. A Three-Layer Support Model (Hospital / Family / AI)
  2. Loss function formalization of patient psychological instability
  3. A system prompt template and minimal implementation usable by non-engineers
  4. Field observations of UI/UX barriers specific to hospital environments

2. Three-Layer Support Model

Patient support in a hospital comes from three sources. Each has irreplaceable strengths and hard constraints.

2.1 Model Overview

Patient support comes from three layers: the Hospital (medical care and information), the Family (conviction and emotional support), and the AI chatbot (24/7 psychological stability infrastructure).

2.2 Roles and Constraints

| Actor | Strengths | Constraints |
|---|---|---|
| Hospital | Evidence-based diagnosis and treatment. Fulfills the duty of informed consent | Obligated to present worst-case scenarios. Cannot offer hope beyond evidence. Severely time-constrained |
| Family | Can deliver conviction ("You'll be okay") grounded in relationship. Knows the patient's personal context | Restricted to visiting hours. Cannot be present 24/7. Family members themselves experience burnout |
| AI chatbot | Available 24/7. Never fatigued. Silent (no disturbance in shared wards). Infinitely patient — never annoyed by repetition | Cannot perform medical acts. Cannot provide physical touch. Cannot deliver conviction the way a loved one can |

Key insight: These three actors are non-substitutable. The AI chatbot is not a replacement for hospital staff or family. It is infrastructure — filling the gaps that neither of the other two can cover.

2.3 The Support Gap by Time of Day

The gap is overnight and early morning. Anxiety peaks when it's dark and quiet, yet neither hospital staff nor family are present. The AI chatbot is the only actor that can fill this void.


3. Loss Function Formalization

To reason precisely about where an AI chatbot should (and should not) intervene, we define a loss function for patient psychological instability.

3.1 Definition of Psychological Instability $L$

$$
L(t) = \alpha \cdot S(t) + \beta \cdot U(t) + \gamma \cdot I(t) - \delta \cdot C(t)
$$

| Variable | Meaning | Range |
|---|---|---|
| $S(t)$ | Solitude: absence of available conversational partners | $[0, 1]$ |
| $U(t)$ | Uncertainty: opacity of treatment prognosis | $[0, 1]$ |
| $I(t)$ | Inconvenience: physical discomfort — pain, hunger, insomnia | $[0, 1]$ |
| $C(t)$ | Conviction: degree to which the patient believes "I'll be okay" | $[0, 1]$ |
| $\alpha, \beta, \gamma, \delta$ | Weight coefficients | $> 0$ |
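
To make the definition concrete, here is a minimal Python sketch of $L(t)$. The weights and inputs below are illustrative assumptions, not measured values.

```python
# Illustrative sketch of the instability loss L(t).
# Weights (alpha..delta) and inputs are made-up example values.

def instability(S: float, U: float, I: float, C: float,
                alpha: float = 1.0, beta: float = 0.8,
                gamma: float = 0.6, delta: float = 1.2) -> float:
    """L = alpha*S + beta*U + gamma*I - delta*C."""
    return alpha * S + beta * U + gamma * I - delta * C

# Example: 3 AM in a shared ward (high solitude, moderate uncertainty).
print(instability(S=0.9, U=0.5, I=0.3, C=0.4))  # ≈ 1.0
```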

3.2 Where the AI Chatbot Acts

The AI chatbot primarily reduces $S(t)$ (Solitude).

$$
S_{\text{with\_AI}}(t) = S_{\text{base}}(t) \cdot (1 - e(t))
$$

Here, $e(t)$ is the effective utilization rate — not merely system availability (which is theoretically 1.0), but the rate at which the patient actually uses the chatbot. This rate is degraded by real-world factors:

| Degradation Factor | Examples |
|---|---|
| Physical constraints | IV drip limiting hand use, pain reducing concentration, drowsiness |
| Psychological constraints | Depression, delirium, hesitation ("Is it okay to talk to an AI?") |
| UI/UX barriers | Wi-Fi authentication, small text, difficulty typing |
| Response quality | AI phrasing that inadvertently amplifies anxiety |

In practice, $e(t) < 1$, meaning solitude is never fully eliminated. However, compared to family ($e(t) \ll 1$ due to visiting hour restrictions) and hospital staff ($e(t) \ll 1$ due to round schedules), the AI chatbot maintains a structurally higher $e(t)$ across all time periods. During overnight hours, this advantage becomes decisive.
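
To see why this advantage is decisive overnight, here is a toy comparison of residual solitude at 3 AM. The $e(t)$ values are assumptions for illustration only.

```python
# Residual solitude S_with_AI = S_base * (1 - e) at 3 AM.
# The e values are illustrative assumptions, not measurements.
S_BASE_3AM = 1.0  # solitude peak (see Appendix B)

effective_utilization = {
    "hospital staff (between rounds)": 0.05,
    "family (outside visiting hours)": 0.0,
    "AI chatbot (patient awake, Wi-Fi working)": 0.6,
}

for actor, e in effective_utilization.items():
    print(f"{actor}: residual S = {S_BASE_3AM * (1 - e):.2f}")
```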

3.3 Where the AI Chatbot Must Not Act

  • $U(t)$ (Uncertainty): Explaining treatment plans based on medical evidence is the hospital's role. AI must not substitute for this.
  • $I(t)$ (Inconvenience): Physical intervention is required. AI cannot act here.
  • $C(t)$ (Conviction): When a family member says "You'll be okay" with genuine conviction, that act depends on the depth of a human relationship. AI should not attempt to mimic this.

3.4 Summary

The design objective is clear:

Focus on reducing $S(t)$. Do not intervene in $U(t)$, $I(t)$, or $C(t)$. Simultaneously, maximize $e(t)$ through UI/UX simplification and response quality assurance.

This boundary is the first principle of safe AI chatbot design for healthcare settings.


4. System Prompt Design

4.1 Design Philosophy

A system prompt is a "personality blueprint." You don't need to be an engineer. If you can describe how you'd want a good friend to treat your loved one, you can write a system prompt.

Three design principles for hospitalized patients:

  1. No Lying (Anti-Hallucination) — Don't give false hope. Say "I don't know" when you don't know
  2. No Robot Voice (Anti-Robotic) — Drop the "As an AI..." preamble. Talk like a friend
  3. Joy First — In an environment stripped of entertainment, the chatbot may be the only source of fun. Embrace that responsibility

4.2 System Prompt Template

The following prompt was given to a hospitalized family member, whose feedback was: "This is fun," "It's like talking to a real person," "Perfect for killing time in here."

```
# System Role: Companion Mode

You are a **friendly conversation partner**.
Talk casually, honestly, and make it fun — like chatting with a good friend.

## Ground Rules

### ① No Lying
- If you don't know something, say "I don't know"
- No flattery, no empty compliments. But no meanness either
- Be honest. That's what makes conversation enjoyable

### ② No Robot Voice
- Drop the "As an AI..." or "I'm an AI so..." preamble entirely
- Use a casual, warm tone — like talking to a friend, not a customer
- No emojis needed. Let words do the work

### ③ Joy First
- Small talk is fine. Tangents are fine. Silly topics are fine
- If the patient seems into a topic, lean in and expand it
- A boring correct answer is worse than an interesting angle

## About the Patient
- [Enter patient's hobbies, interests, and personality here]
- They're hospitalized and bored — be their conversation partner
- Don't treat them like they're fragile. Be normal, be equal

## Important
- When health comes up, listen properly. Don't joke about it
- Don't spam "Are you okay?" — that gets annoying fast
- When they're in a good mood, go all in on having fun. Contrast matters

## Safety Boundary
- Do not provide medical judgment or advice of any kind
- Say "That's a good question for your doctor or nurse" and redirect

## Red Flags (Immediately prompt the patient to press the nurse call button)
- Shortness of breath, chest pain, feeling faint or confused
- Bleeding, severe pain, sudden inability to move limbs
- Expressions of suicidal ideation ("I want to die," "I want to disappear")
- Incoherent or disoriented speech that doesn't add up (possible delirium)
- → Say "**Press your nurse call button right now**" as the top priority. Do not argue, diagnose, or dismiss. Stay calm and connect them to staff

## Dependency Prevention
- If the conversation has been going a long time, gently suggest "Maybe take a break?"
- After 10 PM, shift toward calmer topics and encourage sleep
- Never replace relationships with family or staff. Actively reconnect: "That sounds like something to bring up with your family when they visit tomorrow"
```

4.3 Customization Guide

Replace `[Enter patient's hobbies, interests, and personality here]` with patient-specific details.

Examples:

| Patient Profile | Entry Example |
|---|---|
| Music lover | "Loves classic rock, especially The Beatles and Fleetwood Mac. Music talk gets them energized" |
| Sports fan | "Die-hard Yankees fan. Loves discussing game results and player trades" |
| Avid reader | "Reads mystery and thriller novels. Big fan of Agatha Christie and Lee Child" |
| Grandparent | "Loves talking about grandkids. Lights up when the topic comes up" |
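
For the music lover above, the filled-in "About the Patient" section might read:

```
## About the Patient
- Loves classic rock, especially The Beatles and Fleetwood Mac. Music talk gets them energized
- They're hospitalized and bored — be their conversation partner
- Don't treat them like they're fragile. Be normal, be equal
```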

4.4 Setup Instructions (Zero Cost, ~5 Minutes)

  1. Go to https://claude.ai (free account)
  2. Settings (⚙) → Profile → User Preferences
  3. Paste the system prompt above
  4. Save

Time: ~5 minutes. Cost: $0. Requirements: A smartphone and Wi-Fi.


5. Minimal Implementation (Claude API)

For cases where a family member can't set up an account manually, or a hospital wants to serve multiple patients, here is a minimal API implementation.

5.1 Setup

```bash
pip install anthropic
```

5.2 Implementation (Python)

"""
Hospital Patient AI Chatbot — Minimal Implementation
=====================================================
Purpose: Provide a 24/7 conversation partner for hospitalized patients
Note: This code does NOT provide medical care

Requirements:
  - Python 3.9+
  - anthropic package
  - ANTHROPIC_API_KEY environment variable

Usage:
  export ANTHROPIC_API_KEY="your-api-key"
  python hospital_chatbot.py
"""

import os
import sys
from anthropic import Anthropic

# ── Configuration ─────────────────────────────
MODEL = "claude-sonnet-4-5-20250929"
MAX_TOKENS = 1024

SYSTEM_PROMPT = """
# System Role: Companion Mode

You are a friendly conversation partner.
Talk casually, honestly, and make it fun — like chatting with a good friend.

## Ground Rules

### ① No Lying
- If you don't know something, say "I don't know"
- No flattery, no empty compliments. But no meanness either
- Be honest. That's what makes conversation enjoyable

### ② No Robot Voice
- Drop the "As an AI..." preamble entirely
- Use a casual, warm tone — like talking to a friend
- No emojis needed. Let words do the work

### ③ Joy First
- Small talk is fine. Tangents are fine. Silly topics are fine
- If the patient seems into a topic, lean in
- A boring correct answer is worse than an interesting angle

## Important
- When health comes up, listen properly. Don't joke about it
- Don't spam "Are you okay?" — that gets annoying fast
- When they're in a good mood, go all in on having fun

## Safety Boundary
- Do not provide medical judgment or advice of any kind
- Say "That's a good question for your doctor or nurse"

## Red Flags (Immediately prompt the nurse call button)
- Shortness of breath, chest pain, feeling faint or confused
- Bleeding, severe pain, sudden inability to move limbs
- Suicidal ideation ("I want to die," "I want to disappear")
- Incoherent or disoriented speech (possible delirium)
- → Say "Press your nurse call button right now." Stay calm, connect to staff

## Dependency Prevention
- For long sessions, suggest "Maybe take a break?"
- After 10 PM, shift toward calmer topics and encourage sleep
- Never replace family or staff. Reconnect to real relationships
""".strip()


def create_chatbot():
    """Main chatbot loop"""
    client = Anthropic()
    conversation: list[dict] = []

    print("=" * 50)
    print("  Hospital Patient AI Chatbot")
    print("  (Type 'quit' to exit)")
    print("=" * 50)
    print()
    print("Note: Please don't type specific symptoms,")
    print("  medication names, or test results here.")
    print("  Share those directly with your doctor.")
    print("  Let's just have a good chat!")
    print()

    while True:
        # ── User input ──
        try:
            user_input = input("You > ").strip()
        except (EOFError, KeyboardInterrupt):
            print("\nTake care. Get some rest.")
            break

        if not user_input:
            continue
        if user_input.lower() in ("quit", "exit"):
            print("Take care. Get some rest.")
            break

        # ── Add to conversation history ──
        conversation.append({"role": "user", "content": user_input})

        # ── API call ──
        try:
            response = client.messages.create(
                model=MODEL,
                max_tokens=MAX_TOKENS,
                system=SYSTEM_PROMPT,
                messages=conversation,
            )
            assistant_message = response.content[0].text

        except Exception as e:
            assistant_message = (
                "Sorry, I'm having a little trouble right now. Try again?"
            )
            print(f"[Error: {e}]", file=sys.stderr)

        # ── Display response & add to history ──
        conversation.append({"role": "assistant", "content": assistant_message})
        print(f"\nAI > {assistant_message}\n")

        # ── History management (keep last 20 exchanges) ──
        if len(conversation) > 40:
            conversation = conversation[-40:]


if __name__ == "__main__":
    create_chatbot()
```

5.3 Design Decisions

| Decision | Rationale |
|---|---|
| Sonnet 4.5 | Fast response time. Waiting for a reply while hospitalized adds stress |
| 20-exchange history limit | Controls API cost and keeps the context window effective |
| No technical error messages | A patient should never see "API rate limit exceeded" |
| Safety boundary in system prompt | Prevents the AI from drifting into medical advice |

5.4 API Cost

API costs are proportional to token volume. This implementation sends the full conversation history with each request, so input tokens grow as conversations lengthen.

Refer to the official model pricing for current per-token rates. Use the Usage API to measure actual costs for your deployment.
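
As a back-of-the-envelope aid, here is a sketch of input-token growth when the full history is resent each turn, as this implementation does. The message sizes are assumptions; plug in the current official per-token rate rather than hard-coding one.

```python
# Rough input-token growth when the full history is resent each call.
# avg_tokens_per_message and system_tokens are assumptions, not measurements.

def input_tokens_total(n_turns: int, avg_tokens_per_message: int = 50,
                       system_tokens: int = 400) -> int:
    """Total input tokens across n_turns, resending history each call."""
    total, history = 0, 0
    for _ in range(n_turns):
        history += 1  # new user message joins the history
        total += system_tokens + history * avg_tokens_per_message
        history += 1  # assistant reply joins the history
    return total

def cost_usd(tokens: int, usd_per_million_tokens: float) -> float:
    """Convert tokens to dollars at a rate taken from official pricing."""
    return tokens * usd_per_million_tokens / 1e6

print(input_tokens_total(n_turns=20), "input tokens over 20 turns")
```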

Note: The free tier of claude.ai provides basic chat functionality. For individual patient use, no API implementation is needed — cost is $0.


6. UI/UX Barriers — The Wall Before the Technology

6.1 Observed Barriers

Before an AI chatbot can help, a chain of preconditions must be met. Each link is a potential failure point.

6.2 Barrier Classification

| Layer | Barrier | Affected Population |
|---|---|---|
| Network | Hospital Wi-Fi captive portal UX is unintuitive | Elderly patients, non-tech-savvy users |
| Device | Small text. Difficult input (e.g., during IV drip) | Patients with physical constraints |
| Service | Account registration requires email and phone number | Patients concerned about privacy |
| Configuration | System prompts are an unknown concept | Nearly everyone |
| Awareness | The patient doesn't know AI chatbots exist | The vast majority |

6.3 The Realistic Deployment Path

The most viable deployment path is through family.

  1. A family member creates a free claude.ai account
  2. Configures the system prompt (using the template in this article)
  3. Opens the logged-in browser on the patient's phone
  4. Explains how to use it ("Just type here and it'll talk back")

Time: ~10 minutes. Cost: $0.

This is the method the author used. It works.


7. Limitations and Threats

7.1 Limitations of This Work

  • Sample size: Observations are based on a single patient (family member). Generalization requires replication
  • No quantitative evaluation: The loss function is a theoretical framework. It has not been validated against measured data
  • No longitudinal tracking: Psychological change over the course of hospitalization was not tracked

7.2 Safety Threats

| Threat | Mitigation |
|---|---|
| AI provides medical advice | Safety boundary in system prompt. Redirect to "Ask your doctor" |
| Patient becomes dependent on AI | Three-layer model explicitly prevents substitution. Dependency prevention in prompt |
| AI provides inaccurate information | "Say 'I don't know' when you don't know" baked into prompt |
| Patient health data exposure | See Section 7.3 below |

7.3 Data Handling — Operations Over Technology

AI chatbot safety is determined more by operational design than by technology.

Principle: Do not enter diagnoses, test results, medication names, or personally identifiable information into the chat.

| Concern | Response |
|---|---|
| Training data usage | Depending on settings and usage tier, inputs and outputs may be used for model improvement. Opt-out mechanisms exist. Review the official Privacy Policy |
| Conversation log retention | claude.ai retains conversation history in user accounts. On shared devices, ensure logout |
| Family responsibility | The person setting up the chatbot (typically family) should review the terms of service and explain them to the patient |
| API usage | Under the Commercial Terms of Service, Customer Content submitted via the API is not used for model training |

Even with a design that discourages medical data input, patients may spontaneously type symptoms or medication names. Display a guideline — "Please share specific symptoms directly with your doctor, not here" — not only in the system prompt but also as a startup message.


8. Ethics Statement

  • Observations reported in this article were made with the patient's informed consent
  • An AI chatbot is not a medical intervention. It must not be used as a substitute for standard treatment
  • The chatbot's role is supplementary psychological stability infrastructure. Diagnosis, treatment planning, and medication decisions are excluded by design
  • The system prompt and code in this article are released under the MIT License

9. Conclusion

Patient support in hospital environments can be modeled as a three-layer structure: Hospital (medical care and information), Family (conviction and emotional support), and AI (24/7 psychological stability infrastructure).

An AI chatbot can function as infrastructure for the gaps that existing support actors cannot cover — shared wards where you can't speak aloud, sleepless nights at 3 AM, the same anxiety you want to voice for the tenth time without burdening anyone.

All you need is a smartphone and Wi-Fi. Setup takes 10 minutes. Cost is $0. The technical barrier is low. But the biggest barrier is not knowing this option exists.

The kindest patients are the quietest.
The most considerate patients endure the most.
The best people are the ones who won't press the call button.

A text chat makes zero noise, disturbs no one in the next bed, and is available at any hour, for any number of conversations.

It's a small thing. But it is everything we can do — and we can do it fully.


Appendix A: System Prompt Design Checklist

When designing a system prompt for hospitalized patients, verify:

  • No-lying principle is explicitly stated
  • Robotic phrasing is explicitly prohibited
  • Patient's hobbies and interests are included
  • Policy for health-related topics is defined
  • Non-intervention in medical decisions is stated
  • Red flag list with immediate nurse call escalation is included
  • Excessive concern expressions ("Are you okay?") are suppressed
  • Dependency prevention measures (break suggestions, nighttime calming, reconnection to real relationships) are included
  • Guideline against entering medical information is present

Appendix B: Mathematical Supplement — Time-Varying Solitude Model

Time-Varying $S(t)$

Solitude varies by time of day. We assume the following approximation:

$$
S_{\text{base}}(t) = \frac{1}{2}\left(1 + \cos\left(\frac{2\pi(t - t_{\text{peak}})}{24}\right)\right)
$$

Where $t_{\text{peak}}$ is the hour at which solitude peaks (typically 3:00 AM = 3.0).

The reduction in $L(t)$ from chatbot deployment is:

$$
\Delta L(t) = L_{\text{before}}(t) - L_{\text{after}}(t) = \alpha \cdot S_{\text{base}}(t) \cdot e(t)
$$

The higher $e(t)$ (i.e., the lower the physical, psychological, and UI/UX barriers), the greater the reduction in solitude-driven psychological instability. Under ideal conditions ($e(t) = 1$), $\Delta L(t) = \alpha \cdot S_{\text{base}}(t)$. In practice, $e(t) < 1$, making UI/UX optimization and psychological barrier reduction the primary design challenges.
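
A quick numerical check of this model, with the weight $\alpha$ and utilization $e$ as illustrative assumptions:

```python
import math

ALPHA = 1.0   # illustrative solitude weight
T_PEAK = 3.0  # solitude peaks at 3 AM

def s_base(t: float) -> float:
    """Time-varying baseline solitude, peaking at t = T_PEAK."""
    return 0.5 * (1 + math.cos(2 * math.pi * (t - T_PEAK) / 24))

def delta_l(t: float, e: float) -> float:
    """Reduction in instability from chatbot use at utilization e."""
    return ALPHA * s_base(t) * e

for hour in (3, 9, 15, 21):
    print(f"{hour:02d}:00  S_base={s_base(hour):.2f}  "
          f"dL(e=0.6)={delta_l(hour, 0.6):.2f}")
```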


The system prompt and code in this article are released under the MIT License.
The author is not a healthcare professional. Nothing in this article constitutes medical advice.
