Detection System
LMIF uses a three-layer detection system to identify potential violations of protected identities.
Overview
The detection system runs whenever:
- A platform checks an identity before avatar creation
- Scheduled scans run on existing content
- A creator boxes their identity (scanning for existing violations)
Three-Layer Defense
```
┌─────────────────────────────────────────────────────────────┐
│                    DETECTION PIPELINE                       │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  LAYER 1: Registry Match                                    │
│  ├── Speed: <100ms                                          │
│  ├── Method: Database matching (name + image)               │
│  ├── Output: Confidence score                               │
│  └── Triggers: High confidence   → AUTO_FLAG                │
│                Medium confidence → Layer 2                  │
│                Low confidence    → NO_ACTION                │
│                                                             │
│  LAYER 2: AI Analysis                                       │
│  ├── Speed: 2-5 seconds                                     │
│  ├── Method: Vision + semantic analysis                     │
│  ├── Checks: Name variations, leetspeak, visual similarity  │
│  ├── Parody detection: Identifies satire/parody content     │
│  └── Output: Enhanced confidence + classification           │
│                                                             │
│  LAYER 3: Human Review                                      │
│  ├── Speed: 24-48 hours                                     │
│  ├── Method: Human reviewers in queue                       │
│  ├── Handles: Edge cases, appeals, legal gray areas         │
│  └── Output: Final determination                            │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```
Layer 1: Registry Match
The fastest layer, optimized for sub-100ms responses.
How It Works
1. Extract the name from the avatar/content
2. Query the boxed identity registry
3. If the name matches, compare reference images
4. Calculate a confidence score
Triggers
| Confidence | Action |
|---|---|
| High | AUTO_FLAG - Immediate violation created |
| Medium | Escalate to Layer 2 |
| Low | NO_ACTION - Allow to proceed |
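The trigger table can be sketched as a simple routing function. The numeric thresholds (0.9 and 0.6) are illustrative assumptions rather than documented LMIF values, and `ESCALATE_LAYER_2` is a hypothetical label for the medium-confidence path:

```javascript
// Route a Layer 1 confidence score to an action.
// Thresholds 0.9 / 0.6 are assumed for illustration only.
function routeConfidence(confidence) {
  if (confidence >= 0.9) return "AUTO_FLAG";        // high: violation created immediately
  if (confidence >= 0.6) return "ESCALATE_LAYER_2"; // medium: hand off to AI analysis
  return "NO_ACTION";                               // low: allow to proceed
}
```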
What It Catches
- Exact name matches
- Known name variations (registered with claim)
- Direct image matches
Layer 2: AI Analysis
More sophisticated analysis for ambiguous cases.
How It Works
- Vision analysis of image content
- Semantic analysis of name/description
- Cross-reference with boxed identities
- Parody/satire detection
What It Catches
- Leetspeak variations (`T4ylor Sw1ft`)
- Phonetic spellings (`Tay Tay`)
- Visual similarity without exact name
- Deepfakes and AI-generated likenesses
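As a rough illustration of how leetspeak and stylized spellings can be caught, a matcher might normalize names before comparing them against boxed identities. The substitution map below is a simplified assumption, not LMIF's actual matcher:

```javascript
// Map common leetspeak characters back to letters (illustrative subset).
const LEET_MAP = { "0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t", "@": "a", "$": "s" };

// Lowercase, de-leet, and strip punctuation so variant spellings
// collapse to a canonical form for comparison.
function normalizeName(name) {
  return name
    .toLowerCase()
    .split("")
    .map((ch) => LEET_MAP[ch] ?? ch)
    .join("")
    .replace(/[^a-z ]/g, "")
    .replace(/\s+/g, " ")
    .trim();
}

normalizeName("T4ylor Sw1ft"); // → "taylor swift"
```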
Parody Detection
Factors considered:
- Explicit parody disclaimers
- Obvious satirical context
- Transformative elements
- Commentary/criticism markers
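The factors above could be combined into a likelihood score like the `parodyLikelihood` field returned by the API. The weights and factor names here are assumptions for illustration; the real Layer 2 classifier is model-based, not a fixed checklist:

```javascript
// Toy heuristic: sum the weights of whichever parody signals are present.
// Weights are invented for illustration and do not reflect LMIF internals.
function parodyLikelihood(factors) {
  const weights = {
    explicitDisclaimer: 0.4,
    satiricalContext: 0.25,
    transformativeElements: 0.2,
    commentaryMarkers: 0.15,
  };
  let score = 0;
  for (const [factor, weight] of Object.entries(weights)) {
    if (factors[factor]) score += weight;
  }
  return score; // 0 (no parody signals) up to 1 (all signals present)
}
```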
Layer 3: Human Review
For edge cases that require human judgment.
When It’s Used
- Appeals from avatar creators
- Legal gray areas
- High-value/high-stakes decisions
- Disputed parody claims
Review Process
1. Case enters the review queue
2. Reviewer examines all evidence
3. Decision made within SLA (24-48 hours)
4. Both parties notified of the outcome
5. Appeal path available
Key Rule: Image + Name Together
The system weighs image and name together; a common name on its own is never enough to block:
| Scenario | Result |
|---|---|
| “John Smith” alone | NOT blockable (common name) |
| “John Smith” + Keanu’s face | BLOCKED |
| Generic image + “Keanu Reeves” | BLOCKED |
| Generic image + “John Smith” | NOT blockable |
This protects real people who share common names with celebrities.
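The scenarios in the table reduce to a small decision rule. Here `nameMatch`, `imageMatch`, and `commonName` are hypothetical inputs standing in for the registry and vision results:

```javascript
// Sketch of the image-plus-name rule from the table above.
function shouldBlock({ nameMatch, imageMatch, commonName }) {
  if (imageMatch) return true;               // a protected likeness blocks regardless of name
  if (nameMatch && !commonName) return true; // a distinctive protected name blocks even with a generic image
  return false;                              // a common name alone is never enough
}
```

For example, `shouldBlock({ nameMatch: true, imageMatch: false, commonName: true })` returns `false`, matching the “John Smith” rows.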
Detection Actions
| Action | Meaning | What Happens |
|---|---|---|
| `AUTO_FLAG` | High confidence violation | Violation created, grace period starts |
| `QUEUE_REVIEW` | Needs human review | Added to review queue |
| `NO_ACTION` | No match found | Nothing happens |
| `PARODY_DETECTED` | Identified as parody | May be allowed depending on policy |
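One way a platform might react to the `action` field of a detection result; the handler return values are illustrative, and only the action strings come from the table above:

```javascript
// Dispatch on the detection action string.
function handleDetection(result) {
  switch (result.action) {
    case "AUTO_FLAG":
      return "violation-created";   // grace period starts
    case "QUEUE_REVIEW":
      return "awaiting-human-review";
    case "PARODY_DETECTED":
      return "policy-dependent";    // allowed or not, per identity policy
    case "NO_ACTION":
    default:
      return "allowed";
  }
}
```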
Detection API
Platforms can trigger detection scans:
```javascript
// Check single identity
const result = await lmif.identity.check({
  name: "Celebrity Name",
  imageUrl: "https://example.com/avatar.jpg",
});

// Batch check multiple avatars
const results = await lmif.identity.checkBatch([
  { name: "Name 1", imageUrl: "https://..." },
  { name: "Name 2", imageUrl: "https://..." },
]);

// Trigger scan of existing content
const scan = await lmif.detection.scan({
  platformId: "your_platform_id",
  contentType: "avatars",
});
```
Response Format
```json
{
  "detected": true,
  "confidence": 0.95,
  "layer": 1,
  "matchedIdentity": {
    "claimId": "claim_xyz789",
    "boxId": "box_abc123",
    "name": "Taylor Swift",
    "policy": "MONETIZE"
  },
  "classification": "EXACT_MATCH",
  "parodyLikelihood": 0.02,
  "action": "AUTO_FLAG"
}
```
Detection Accuracy
| Layer | Speed | Accuracy | Use Case |
|---|---|---|---|
| Layer 1 | <100ms | 95%+ | Real-time checks |
| Layer 2 | 2-5s | 90%+ | Fuzzy matching |
| Layer 3 | 24-48h | 99%+ | Final decisions |