You are interacting with an AI system, not a human pastor, counsellor, or minister. This notice explains how the AI works and what its limitations are, in compliance with the EU AI Act (Article 50) and UK AI transparency expectations.
You are talking to an AI
Divinely is powered entirely by large language models (LLMs) — a type of artificial intelligence trained on large amounts of text. No human reads or responds to your messages. When you use the chat, an AI model generates your response based on your message, the conversation history, and a set of instructions we provide.
Divinely does not employ any human spiritual directors, counsellors, or clergy who review or contribute to responses.
Which AI models does Divinely use?
- Free tier: Google Gemini Flash Lite — a large language model developed by Google LLC, based in the United States. Your messages are sent to Google's API for processing.
- Standard tier: Anthropic Claude Haiku 4.5 — developed by Anthropic PBC, based in the United States. Your messages are sent to Anthropic's API for processing.
- Max tier: Anthropic Claude Sonnet 4.6 — Anthropic's more capable model, also processed via Anthropic's API.
In all cases, your messages are transmitted to the relevant third-party AI provider. See our Privacy Policy for details on data protection safeguards.
What data is sent to the AI?
Each time you send a message, the following is sent to the AI provider:
- Your message text
- Recent conversation history (up to the last 10 message pairs)
- A system prompt written by us that instructs the AI to act as a Christian spiritual companion
- Your profile context (denomination, life stage, faith journey) if you have filled it in — this helps the AI give more relevant responses
The following is never sent to the AI:
- Your email address
- Your payment details or subscription status
- Your prayer journal entries (unless you explicitly click "Reflect on my prayers")
- Any data from other users
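As an illustration only, the two lists above can be sketched as a payload shape. Every name in this block is hypothetical (it is not Divinely's actual API or implementation); it simply shows what is, and is not, included in each request:

```typescript
// Hypothetical sketch of the per-message payload described above.
// All field and function names are illustrative, not Divinely's real code.

interface ProfileContext {
  denomination?: string;
  lifeStage?: string;
  faithJourney?: string;
}

interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

interface AIRequest {
  systemPrompt: string;     // instructions written by Divinely
  history: ChatMessage[];   // at most the last 10 message pairs
  message: string;          // the user's new message text
  profile?: ProfileContext; // only if the user has filled it in
  // Deliberately absent: email address, payment details,
  // prayer journal entries, and any data from other users.
}

// Trim the history to the last 10 user/assistant pairs before sending.
function buildRequest(
  systemPrompt: string,
  fullHistory: ChatMessage[],
  message: string,
  profile?: ProfileContext
): AIRequest {
  const maxMessages = 10 * 2; // 10 pairs = 20 messages
  return {
    systemPrompt,
    history: fullHistory.slice(-maxMessages),
    message,
    profile,
  };
}
```

The design point is that the payload is rebuilt from scratch on every message, so nothing outside these fields can reach the AI provider.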
How the AI is instructed to behave
We provide the AI with a system prompt that tells it to:
- Respond with warmth, empathy, and humility
- Ground responses in the Bible and cite specific scripture
- Acknowledge human complexity before offering guidance
- Clearly state it is not a substitute for a pastor, counsellor, or professional
- Immediately recommend professional help if a message suggests mental health crisis or self-harm
- Keep responses readable and avoid excessive length
We do not instruct the AI to create emotional dependency, encourage continued usage beyond what is helpful, or use persuasive techniques. If you notice behaviour that seems manipulative or harmful, please report it using the feedback button.
Limitations you must be aware of
AI has real limitations that are especially important in a spiritual context:
- Factual errors: AI can misquote scripture, cite incorrect references, or conflate theological traditions. Always verify important quotes with a physical Bible or trusted source.
- No spiritual discernment: AI processes text statistically — it has no faith, no prayer life, and no spiritual experience. It cannot replace the discernment of a trained pastor or spiritual director.
- Denominational variation: Christian theology varies significantly across denominations. The AI may not reflect your tradition's specific interpretation, sacramental theology, or practice.
- Context limitations: the AI can only see the current conversation. It has no memory of previous separate conversations and cannot track your faith journey over time the way a pastor would.
- Not real-time: AI models have training data cutoffs and do not know about very recent events, publications, or theological developments.
- Cannot replace community: the AI cannot provide the sacraments, baptism, communion, confession, or the lived experience of Christian community. These require human presence.
EU AI Act classification
Under the EU AI Act (Regulation (EU) 2024/1689), Divinely falls into the limited-risk category: it is not a high-risk system under Article 6, but as an AI chatbot that interacts with users it is subject to the transparency obligations of Article 50:
- Users are clearly informed at all times that they are interacting with an AI, not a human
- AI-generated responses are marked throughout the interface
- The system does not use subliminal techniques or exploit vulnerabilities to influence users
- The system does not attempt to deceive users about its nature
- This AI Transparency Notice is provided in plain, accessible language
Divinely does not fall under the high-risk categories of the EU AI Act (Annex III). It is not used for education, employment, essential services, biometrics, or law enforcement purposes.
Human oversight and feedback
Every AI response includes a way to report it as unhelpful or inappropriate. Reported responses are reviewed by our team to identify patterns and improve the system prompt. We take reports of harmful, manipulative, or theologically misleading responses seriously.
We conduct regular reviews of AI outputs to check for accuracy, safety, and alignment with our stated values. We do not use your conversation data to retrain AI models.
Contact
Questions about AI use, transparency, or to report a concerning response: ai@divinely.me