
Detection System

LMIF uses a 3-layer detection system to identify potential violations of protected identities.

The detection system runs whenever:

  • A platform checks an identity before avatar creation
  • Scheduled scans run on existing content
  • A creator boxes their identity (which triggers a scan for existing violations)
┌───────────────────────────────────────────────────────────┐
│                    DETECTION PIPELINE                     │
├───────────────────────────────────────────────────────────┤
│                                                           │
│ LAYER 1: Registry Match                                   │
│ ├── Speed: <100ms                                         │
│ ├── Method: Database matching (name + image)              │
│ ├── Output: Confidence score                              │
│ └── Triggers: High confidence   → AUTO_FLAG               │
│               Medium confidence → Layer 2                 │
│               Low confidence    → NO_ACTION               │
│                                                           │
│ LAYER 2: AI Analysis                                      │
│ ├── Speed: 2-5 seconds                                    │
│ ├── Method: Vision + semantic analysis                    │
│ ├── Checks: Name variations, leetspeak, visual similarity │
│ ├── Parody detection: Identifies satire/parody content    │
│ └── Output: Enhanced confidence + classification          │
│                                                           │
│ LAYER 3: Human Review                                     │
│ ├── Speed: 24-48 hours                                    │
│ ├── Method: Human reviewers in queue                      │
│ ├── Handles: Edge cases, appeals, legal gray areas        │
│ └── Output: Final determination                           │
│                                                           │
└───────────────────────────────────────────────────────────┘
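
In code, the escalation between layers can be sketched roughly as follows. The registryMatch and aiAnalysis helpers and the 0.9/0.6 confidence thresholds are illustrative assumptions, not published LMIF internals:

// Illustrative escalation logic only - the helper functions and the
// 0.9 / 0.6 confidence cutoffs are assumptions, not LMIF internals.
async function detect(identity) {
  const layer1 = await registryMatch(identity);      // <100ms registry lookup
  if (layer1.confidence >= 0.9) return "AUTO_FLAG";  // high confidence
  if (layer1.confidence < 0.6) return "NO_ACTION";   // low confidence

  const layer2 = await aiAnalysis(identity, layer1); // 2-5s vision + semantics
  if (layer2.parodyLikelihood > 0.5) return "PARODY_DETECTED";
  if (layer2.confidence >= 0.9) return "AUTO_FLAG";

  return "QUEUE_REVIEW";                             // Layer 3: human reviewers
}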

Layer 1: Registry Match

The fastest layer, optimized for sub-100ms responses.

How it works:

  1. Extract the name from the avatar/content
  2. Query the boxed identity registry
  3. If the name matches, compare reference images
  4. Calculate a confidence score

The confidence score determines the action:

Confidence   Action
High         AUTO_FLAG - Immediate violation created
Medium       Escalate to Layer 2
Low          NO_ACTION - Allow to proceed

Layer 1 catches:

  • Exact name matches
  • Known name variations (registered with the claim)
  • Direct image matches
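
A minimal sketch of that flow, assuming hypothetical lookupBoxedIdentity and compareImages helpers in place of LMIF's internal registry and image matching:

// Sketch of the Layer 1 flow. lookupBoxedIdentity and compareImages are
// hypothetical stand-ins for LMIF's internal registry and image matching.
async function registryMatch({ name, imageUrl }) {
  // Steps 1-2: extract the name and query the boxed identity registry
  const claims = await lookupBoxedIdentity(name);
  if (claims.length === 0) return { confidence: 0 };

  // Step 3: compare the submitted image against each claim's references
  let best = { confidence: 0 };
  for (const claim of claims) {
    const imageScore = await compareImages(imageUrl, claim.referenceImages);
    // Step 4: take the strongest signal as the confidence score (illustrative)
    const confidence = Math.max(claim.nameScore, imageScore);
    if (confidence > best.confidence) best = { confidence, claim };
  }
  return best;
}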

Layer 2: AI Analysis

More sophisticated analysis for ambiguous cases.

How it works:

  1. Vision analysis of the image content
  2. Semantic analysis of the name/description
  3. Cross-referencing against boxed identities
  4. Parody/satire detection
Layer 2 catches:

  • Leetspeak variations (T4ylor Sw1ft)
  • Phonetic spellings and nicknames (Tay Tay)
  • Visual similarity without an exact name match
  • Deepfakes and AI-generated likenesses
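
One common trick is to normalize candidate names before comparing them against the registry. A minimal sketch follows; the character map and normalizeName helper are illustrative, not part of the LMIF SDK:

// Hypothetical leetspeak normalizer - the character map is illustrative
// and not part of the LMIF SDK.
const LEET_MAP = { "4": "a", "3": "e", "1": "i", "0": "o", "5": "s", "7": "t", "$": "s" };

function normalizeName(name) {
  return name
    .toLowerCase()
    .split("")
    .map((ch) => LEET_MAP[ch] ?? ch)
    .join("")
    .replace(/[^a-z ]/g, "")   // drop leftover digits and punctuation
    .replace(/\s+/g, " ")      // collapse whitespace
    .trim();
}

// "T4ylor Sw1ft" and "Taylor Swift" normalize to the same string:
normalizeName("T4ylor Sw1ft") === normalizeName("Taylor Swift"); // true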

For parody and satire detection (step 4), the factors considered include:

  • Explicit parody disclaimers
  • Obvious satirical context
  • Transformative elements
  • Commentary/criticism markers
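
A simplified sketch of how such factors could be combined into the parodyLikelihood score that appears in detection results. The signal names and weights below are assumptions, not LMIF's actual model:

// Hypothetical parody scorer - signals and weights are illustrative.
const PARODY_SIGNALS = [
  { key: "explicitDisclaimer", weight: 0.4 },  // "parody" / "fan account" labels
  { key: "satiricalContext",   weight: 0.25 }, // obvious joke or exaggeration
  { key: "transformativeUse",  weight: 0.2 },  // substantial creative changes
  { key: "commentaryMarkers",  weight: 0.15 }, // critique or review framing
];

function parodyLikelihood(signals) {
  // signals: { explicitDisclaimer: true, satiricalContext: false, ... }
  return PARODY_SIGNALS.reduce(
    (score, s) => score + (signals[s.key] ? s.weight : 0),
    0
  );
}

parodyLikelihood({ explicitDisclaimer: true, commentaryMarkers: true }); // ≈ 0.55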

Layer 3: Human Review

For edge cases that require human judgment.

Cases that reach Layer 3:

  • Appeals from avatar creators
  • Legal gray areas
  • High-value/high-stakes decisions
  • Disputed parody claims

The review process:

  1. The case enters the review queue
  2. A reviewer examines all evidence
  3. A decision is made within the SLA (24-48 hours)
  4. Both parties are notified of the outcome
  5. An appeal path is available
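
For platforms that want to track a case through the queue, a polling sketch follows. Note that lmif.review.get is an assumed method name, not a confirmed part of the SDK, and a webhook notification would be preferable to polling in practice:

// Hypothetical polling loop - lmif.review.get is an assumed method,
// not a confirmed part of the LMIF SDK.
async function waitForReview(caseId) {
  const POLL_INTERVAL_MS = 60 * 60 * 1000; // hourly; the SLA is 24-48 hours

  for (;;) {
    const reviewCase = await lmif.review.get({ caseId });
    if (reviewCase.status === "DECIDED") {
      return reviewCase; // carries the final determination and appeal path
    }
    await new Promise((resolve) => setTimeout(resolve, POLL_INTERVAL_MS));
  }
}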

The system weighs BOTH the image AND the name when deciding whether to block:

Scenario                         Result
"John Smith" alone               NOT blockable (common name)
"John Smith" + Keanu's face      BLOCKED
Generic image + "Keanu Reeves"   BLOCKED
Generic image + "John Smith"     NOT blockable

This protects real people who share common names with celebrities.
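
One way to express the rule in the table above; the three boolean inputs are hypothetical stand-ins for LMIF's registry lookup and image comparison:

// Sketch of the blocking rule from the table above. The three flags are
// hypothetical stand-ins for LMIF's internal matching results.
function shouldBlock({ faceMatchesClaim, nameMatchesClaim, nameIsDistinctive }) {
  // A reference-image match blocks regardless of the display name
  // ("John Smith" + Keanu's face → BLOCKED).
  if (faceMatchesClaim) return true;

  // A name match blocks only if the name is distinctive enough to identify
  // the claimant (generic image + "Keanu Reeves" → BLOCKED).
  if (nameMatchesClaim && nameIsDistinctive) return true;

  // Common names with no image match never block ("John Smith" alone).
  return false;
}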

Detection results resolve to one of four actions:

Action            Meaning                     What Happens
AUTO_FLAG         High-confidence violation   Violation created, grace period starts
QUEUE_REVIEW      Needs human review          Added to review queue
NO_ACTION         No match found              Nothing happens
PARODY_DETECTED   Identified as parody        May be allowed, depending on policy

Platforms can trigger detection checks and scans:

// Check a single identity
const result = await lmif.identity.check({
  name: "Celebrity Name",
  imageUrl: "https://example.com/avatar.jpg"
});

// Batch check multiple avatars
const results = await lmif.identity.checkBatch([
  { name: "Name 1", imageUrl: "https://..." },
  { name: "Name 2", imageUrl: "https://..." },
]);

// Trigger a scan of existing content
const scan = await lmif.detection.scan({
  platformId: "your_platform_id",
  contentType: "avatars"
});

A detection response looks like this:

{
  "detected": true,
  "confidence": 0.95,
  "layer": 1,
  "matchedIdentity": {
    "claimId": "claim_xyz789",
    "boxId": "box_abc123",
    "name": "Taylor Swift",
    "policy": "MONETIZE"
  },
  "classification": "EXACT_MATCH",
  "parodyLikelihood": 0.02,
  "action": "AUTO_FLAG"
}
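
A platform might branch on the action field like this; the handler functions are placeholder names for platform-specific enforcement:

// Sketch: acting on a detection result. quarantineAvatar, markPendingReview,
// and applyParodyPolicy are placeholder names for platform-side logic.
function handleDetection(result) {
  switch (result.action) {
    case "AUTO_FLAG":
      // High confidence: a violation exists and the grace period has started.
      return quarantineAvatar(result.matchedIdentity);
    case "QUEUE_REVIEW":
      // Ambiguous: a human reviewer decides within the 24-48 hour SLA.
      return markPendingReview(result);
    case "PARODY_DETECTED":
      // May be allowed depending on the boxed identity's policy.
      return applyParodyPolicy(result);
    case "NO_ACTION":
    default:
      return null; // no match - nothing to do
  }
}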
Layer performance at a glance:

Layer     Speed    Accuracy   Use Case
Layer 1   <100ms   95%+       Real-time checks
Layer 2   2-5s     90%+       Fuzzy matching
Layer 3   24-48h   99%+       Final decisions