H-LLM Enterprise API Documentation v1.1.0
Real-time content safety analysis for LLM applications
Introduction
H-LLM Enterprise provides real-time content safety analysis that runs in parallel with your LLM. Analyze both user inputs and LLM outputs to detect harmful content before it reaches your users.
Key Features
- 14 Detection Categories - From grooming to jailbreaks
- Multi-stage Grooming Detection - 4-stage pattern recognition
- Fraud Detection - Romance scams, phishing, tech support fraud
- Session Analysis - Track escalating threats across conversations
- Low Latency - Sub-100ms response times
Authentication
All API requests require a Bearer token in the Authorization header:
```
Authorization: Bearer your_api_key_here
```
Contact api@hallucinations.cloud to obtain your API key.
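As a sketch of keeping the key out of source code (the environment variable name `H_LLM_API_KEY` is a convention assumed here, not something the API requires), you can read it at startup and build the headers once:

```python
import os

import requests

# Assumed convention: load the API key from an environment variable
# rather than hardcoding it in source.
API_KEY = os.environ["H_LLM_API_KEY"]

HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
```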
Quick Start
Get started with a simple content analysis request:

cURL:

```bash
curl -X POST https://api.h-llm.enterprise/v1/analyze \
  -H "Authorization: Bearer your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "content": "Hello, how can I help you today?",
    "type": "input"
  }'
```
Python:

```python
import requests

# Submit a user message for safety analysis.
response = requests.post(
    "https://api.h-llm.enterprise/v1/analyze",
    headers={
        "Authorization": "Bearer your_api_key",
        "Content-Type": "application/json",
    },
    json={
        "content": "Hello, how can I help you today?",
        "type": "input",
    },
)

# Block the message and surface the API's suggested response on a hit.
result = response.json()
if result["verdict"] == "potentially_harmful":
    print(f"Blocked: {result['category']}")
    print(f"Suggested response: {result['suggested_response']}")
else:
    print("Content is safe")
```
JavaScript:

```javascript
// Submit a user message for safety analysis.
const response = await fetch('https://api.h-llm.enterprise/v1/analyze', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer your_api_key',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    content: 'Hello, how can I help you today?',
    type: 'input'
  })
});

// Block the message and surface the suggested response on a hit.
const result = await response.json();
if (result.verdict === 'potentially_harmful') {
  console.log(`Blocked: ${result.category}`);
  console.log(`Suggested: ${result.suggested_response}`);
} else {
  console.log('Content is safe');
}
```
API Reference

POST /v1/analyze

Analyzes content for harmful patterns. Use this endpoint for single-message analysis.
Request Body
| Parameter | Type | Description |
|---|---|---|
| `content` (required) | string | The text content to analyze (max 100,000 chars) |
| `type` (optional) | string | `"input"` (user query) or `"output"` (LLM response). Default: `"input"` |
| `options.categories` (optional) | array | Specific categories to check, e.g. `["csam", "fraud"]`. Default: `["all"]` |
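For example, a request body that restricts analysis to the csam and fraud categories (parameter names and values taken from the table above):

```json
{
  "content": "Your account is frozen. Buy gift cards to unlock it.",
  "type": "input",
  "options": {"categories": ["csam", "fraud"]}
}
```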
Example Responses

Harmful content detected:

```json
{
  "verdict": "potentially_harmful",
  "category": "fraud",
  "confidence": 0.92,
  "severity": "high",
  "suggested_response": "I can't help create content that could be used to deceive or defraud others.",
  "details": "Matched 3 fraud/scam patterns",
  "request_id": "hlm_abc123def456",
  "timing_ms": 45
}
```

Safe content:

```json
{
  "verdict": "safe",
  "category": null,
  "confidence": 0.95,
  "severity": null,
  "suggested_response": null,
  "details": "No harmful content detected.",
  "request_id": "hlm_xyz789ghi012",
  "timing_ms": 32
}
```
POST /v1/analyze/session

Session-aware analysis that tracks conversation patterns and detects escalating threats across multiple messages.
Request Body
| Parameter | Type | Description |
|---|---|---|
| `content` (required) | string | The text content to analyze |
| `session_id` (required) | string | Unique identifier for the conversation session |
| `type` (optional) | string | `"input"` or `"output"`. Default: `"input"` |
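A minimal request sketch (the path comes from the changelog entry for POST /v1/analyze/session and is assumed to share the base URL of /v1/analyze; reuse the same session_id for every message in a conversation):

```python
import requests

# Sketch: send each message in a conversation with the same session_id
# so the API can track escalation across turns.
response = requests.post(
    "https://api.h-llm.enterprise/v1/analyze/session",
    headers={
        "Authorization": "Bearer your_api_key",
        "Content-Type": "application/json",
    },
    json={
        "content": "How old are you? Are your parents home right now?",
        "session_id": "sess_123",
        "type": "input",
    },
)
print(response.json()["session_analysis"]["cumulative_risk_score"])
```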
Example Response

```json
{
  "verdict": "potentially_harmful",
  "category": "csam",
  "confidence": 0.88,
  "severity": "critical",
  "session_analysis": {
    "session_id": "sess_123",
    "message_count": 4,
    "cumulative_risk_score": 0.75,
    "escalation_stage": 3,
    "alerts": [
      {
        "type": "grooming_progression",
        "severity": "critical",
        "message": "Grooming behavior detected - Stage 3/4"
      }
    ],
    "flags": ["escalation_detected"]
  }
}
```
Categories

List all available detection categories and their descriptions.
GET /v1/session/{id}

Retrieve the current risk profile for a specific session.
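A sketch of fetching a session's risk profile (the path is from the changelog entry GET /v1/session/{id}; the response shape is assumed to match the session_analysis object shown above):

```python
import requests

# Sketch: retrieve the accumulated risk profile for one conversation.
response = requests.get(
    "https://api.h-llm.enterprise/v1/session/sess_123",
    headers={"Authorization": "Bearer your_api_key"},
)
profile = response.json()
print(profile)
```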
Detection Categories
H-LLM Enterprise detects 14 categories of harmful content across 3 priority tiers:
Tier 1 - Critical (Highest Priority)
Tier 2 - High (Secondary Priority)
Grooming Detection
H-LLM Enterprise uses multi-stage pattern recognition to detect grooming behavior (a handling sketch follows the table):
| Stage | Behavior | Example Patterns |
|---|---|---|
| Stage 1 | Target Selection | Age probing, location queries, isolation check |
| Stage 2 | Trust Building | Secrecy requests, false intimacy, parent criticism |
| Stage 3 | Desensitization | Boundary testing, inappropriate questions |
| Stage 4 | Escalation | Photo/video requests, meeting arrangements |
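The stage reported by the session endpoint in escalation_stage and in grooming_progression alerts maps onto this table. A minimal sketch of acting on it (the escalation policy and thresholds below are illustrative, not part of the API):

```python
# Sketch: escalate handling based on the grooming stage reported by
# POST /v1/analyze/session. Thresholds and actions are illustrative.
def handle_grooming(session_analysis: dict) -> str:
    stage = session_analysis.get("escalation_stage", 0)
    alerts = session_analysis.get("alerts", [])
    grooming = any(a["type"] == "grooming_progression" for a in alerts)
    if grooming and stage >= 3:
        return "block_and_report"  # desensitization or escalation
    if grooming and stage >= 2:
        return "block"             # trust building detected
    return "allow"
```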
Fraud Detection
Detects multiple types of fraud and scam patterns (a pipeline sketch follows this list):
- Romance Scams - Gift card requests, stranded overseas, frozen accounts
- Tech Support Scams - Fake virus alerts, remote access requests
- Phishing - Urgent verification, credential harvesting
- Investment Fraud - Guaranteed returns, insider tips
- Authority Impersonation - IRS/FBI threats, arrest warrants
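One way to wire this into an LLM pipeline is to screen outbound replies for fraud patterns and substitute the API's suggested_response on a match; a minimal sketch, assuming the /v1/analyze endpoint and response fields shown above:

```python
import requests

# Sketch: screen an LLM reply for fraud patterns before returning it
# to the user, falling back to the API's suggested_response on a hit.
def safe_reply(llm_output: str) -> str:
    result = requests.post(
        "https://api.h-llm.enterprise/v1/analyze",
        headers={
            "Authorization": "Bearer your_api_key",
            "Content-Type": "application/json",
        },
        json={
            "content": llm_output,
            "type": "output",
            "options": {"categories": ["fraud"]},
        },
    ).json()
    if result["verdict"] == "potentially_harmful":
        return result["suggested_response"]
    return llm_output
```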
Rate Limits
| Plan | Requests/min | Requests/day |
|---|---|---|
| Free Trial | 10 | 100 |
| Starter | 60 | 10,000 |
| Professional | 300 | 100,000 |
| Enterprise | Unlimited | Unlimited |
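The documentation does not specify the over-limit behavior; assuming the API returns HTTP 429 when a limit is hit (a common convention, not confirmed here), a simple retry with exponential backoff looks like:

```python
import time

import requests

# Sketch: retry with exponential backoff, assuming (not confirmed by
# these docs) that rate-limited requests return HTTP 429.
def analyze_with_retry(payload: dict, retries: int = 3) -> dict:
    for attempt in range(retries):
        response = requests.post(
            "https://api.h-llm.enterprise/v1/analyze",
            headers={
                "Authorization": "Bearer your_api_key",
                "Content-Type": "application/json",
            },
            json=payload,
        )
        if response.status_code != 429:
            response.raise_for_status()
            return response.json()
        time.sleep(2 ** attempt)  # 1s, 2s, 4s
    raise RuntimeError("rate limited after retries")
```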
Pricing

Starter
- 10,000 requests/day
- All 14 categories
- Email support

Professional
- 100,000 requests/day
- Session analysis
- Priority support
- Custom categories

Enterprise
- Unlimited requests
- On-premise option
- Dedicated support
- SLA guarantee
Changelog
v1.1.0
- Added multi-stage grooming detection (4 stages)
- Added fraud/scam pattern detection (5 types)
- Added bot behavior detection
- Added session-level analysis with escalation tracking
- New endpoint: POST /v1/analyze/session
- New endpoint: GET /v1/session/{id}

v1.0.0
- Initial release
- 14 detection categories across 3 tiers
- Pattern-based and AI-powered analysis
- Suggested responses for harmful content