VIRO Knowledge Base

The RICE Engine: How AI Evaluates Video Virality Frame by Frame

The RICE Engine V5 (Retention Intelligence & Content Evaluation) is VIRO's core analysis system. Here is how it works, what it measures, and why frame-by-frame AI evaluation produces predictions that post-publication analytics cannot.

VIRO Editorial  ·  Updated 2026-02-26  ·  viralroast.com/learn/rice-engine-explained

The fundamental problem with video analytics is temporal: all standard analytics tools measure what happened after you published. By the time you know your video failed the seed test, the algorithm has already deprioritized it, the optimal posting window has closed, and the effort invested in that video is sunk.

The RICE Engine V5 — Retention Intelligence & Content Evaluation — inverts this model. It watches the video before publication and outputs a prediction of what the algorithm will measure during the seed test. The prediction is frame-level and mechanistically attributed: not just "this video will underperform" but "at second 00:08, this specific content decision will cause an estimated retention drop for this specific reason."

The Five Evaluation Dimensions

RICE evaluates video content across five independent dimensions, each scored separately with specific attribution to content decisions:

Dimension 1: Hook Strength

The 0–3 second window is evaluated as a standalone seed test within the seed test. RICE identifies which hook structure is in use (curiosity gap, direct problem, controversial claim, demonstration, or stakes), evaluates whether the execution delivers the required neurological mechanism, measures visual salience in the first 200ms, checks audio-visual synchrony, and assesses promise clarity.

Dimension 2: Retention Curve Integrity

RICE maps the probability of retention drop-off at each second of the video. It identifies which of the three structural failure zones is at risk: Zone 1 (0–3s hook), Zone 2 (30–60% runtime mid-video), or Zone 3 (80–90% pre-close). Each predicted drop is attributed to a specific content decision: a transition that breaks the promise, a pacing issue that increases cognitive load, a missing loop trigger.

Dimension 3: Psychological Trigger Density

VIRO's 20+ psychological engagement triggers are pattern-matched against the video's content, structure, and delivery. Triggers include: curiosity loops, social proof signals, stakes establishment, identity resonance, counterintuitive claims, demonstration evidence, pattern interrupts, and others. RICE identifies which are active, at which timestamps, and whether their timing creates compounding engagement (triggers stacked in the right order) or interference (triggers that cancel each other).
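The compounding-versus-interference idea above can be sketched as a timing check over the triggers detected in a video. Everything specific here is an assumption for illustration: the interference pair, the 2-second proximity window, and the function name are invented, not VIRO's actual model.

```python
# Assumed interference pair and window, for illustration only.
INTERFERING = {frozenset({"pattern_interrupt", "stakes_establishment"})}

def find_interference(triggers: list[tuple[float, str]], window_s: float = 2.0):
    """Return pairs of nearby triggers that are assumed to cancel each other.

    `triggers` is a list of (timestamp_seconds, trigger_name) pairs.
    """
    conflicts = []
    for i, (t1, name1) in enumerate(triggers):
        for t2, name2 in triggers[i + 1:]:
            if abs(t1 - t2) <= window_s and frozenset({name1, name2}) in INTERFERING:
                conflicts.append((t1, name1, t2, name2))
    return conflicts

print(find_interference([(1.0, "curiosity_loop"),
                         (4.0, "pattern_interrupt"),
                         (5.0, "stakes_establishment")]))
```

In this toy run, the pattern interrupt at 00:04 and the stakes moment at 00:05 land inside the window and are flagged as a conflict, while the curiosity loop at 00:01 is untouched.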

Dimension 4: Platform Algorithm Alignment

Each platform runs a different behavioral test on new content. TikTok's primary metric is 3-second completion rate and share behavior. YouTube Shorts prioritizes completion rate and rewatch ratio. Instagram Reels weights visual quality, cover frame strength, and watermark absence. RICE applies platform-specific models to flag technical suppressors and behavioral mismatches.
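Platform-specific weighting of the signals named above can be sketched as a weighted sum. The signal names come from this article; the weight values are invented for illustration and are not VIRO's calibrated parameters.

```python
# Weights are illustrative assumptions, not VIRO's calibrated values.
PLATFORM_WEIGHTS = {
    "tiktok": {"three_sec_completion": 0.6, "share_rate": 0.4},
    "shorts": {"completion_rate": 0.6, "rewatch_ratio": 0.4},
    "reels":  {"visual_quality": 0.4, "cover_frame": 0.3, "no_watermark": 0.3},
}

def platform_score(platform: str, signals: dict[str, float]) -> float:
    """Weighted sum of the platform's priority signals (each in 0..1)."""
    weights = PLATFORM_WEIGHTS[platform]
    return sum(w * signals.get(name, 0.0) for name, w in weights.items())

print(round(platform_score("tiktok",
                           {"three_sec_completion": 0.8, "share_rate": 0.5}), 2))
```

The same video scores differently per platform because each model weights a different behavioral test, which is why RICE flags mismatches per platform rather than issuing one universal score.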

Dimension 5: Share/Save Trigger Presence

Completion rate keeps a video in algorithmic distribution. Share and save behavior amplifies it beyond the initial cohort. RICE identifies whether the video contains a specific moment with the emotional, identity, or informational profile that drives social redistribution — and whether that moment is correctly positioned (typically in the final third) to convert viewers already invested in the content.
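The positioning rule above (share moment typically in the final third) reduces to a one-line check. The function name and the exact two-thirds cutoff are illustrative assumptions drawn from the "final third" wording, not a precise VIRO threshold.

```python
def in_final_third(moment_s: float, runtime_s: float) -> bool:
    """True if the candidate share/save moment lands in the final third."""
    return moment_s >= runtime_s * (2 / 3)

print(in_final_third(50.0, 60.0))  # True: second 50 of 60 is past the 40s mark
```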

The GO/NO-GO Output

The primary RICE output is a binary verdict: GO or NO-GO. No score that allows creative rationalization ("7/10 is fine"), no ambiguous "areas for improvement" framework. GO means the video has sufficient structural integrity to justify publication. NO-GO means a specific structural failure exists that will predictably damage seed test performance, and it should be addressed before publication.

The NO-GO output includes: the failure point with timestamp attribution, the mechanism of failure (which of the 5 dimensions, which specific element), the ranked action plan (what to fix first based on expected impact), and time estimates for each fix.
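The NO-GO fields listed above suggest a data shape like the following. This is a hypothetical sketch: every field name and the ranking rule (sort fixes by expected impact) are assumptions for illustration, not VIRO's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Fix:
    description: str
    expected_impact: float   # estimated retention gain, used for ranking
    time_estimate_min: int   # rough effort estimate in minutes

@dataclass
class NoGoReport:
    timestamp_s: float       # failure point with timestamp attribution
    dimension: str           # which of the 5 dimensions failed
    element: str             # the specific content element at fault
    fixes: list[Fix] = field(default_factory=list)

    def action_plan(self) -> list[Fix]:
        """Fixes ranked by expected impact, highest first."""
        return sorted(self.fixes, key=lambda f: f.expected_impact, reverse=True)

report = NoGoReport(8.0, "retention_curve", "promise-breaking transition",
                    [Fix("tighten transition", 0.12, 20),
                     Fix("add loop trigger", 0.30, 45)])
print([f.description for f in report.action_plan()])  # highest-impact fix first
```

Ranking by expected impact rather than listing order mirrors the article's "what to fix first" framing: the creator sees the highest-leverage change at the top of the plan.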

Why Frame-Level Analysis Outperforms Creator Intuition

Creators are systematically biased toward their own content. They know the context, understand the references, and feel the energy of the performance. None of this is accessible to the cold stranger who encounters the video mid-scroll with no context, no goodwill, and a 3-second tolerance before the next swipe.

RICE operates without context bias. It evaluates the first 3 seconds as if it has never heard of the creator. It evaluates the mid-video structure against documented behavioral retention curves, not against the creator's intended arc. It evaluates the share trigger against behavioral research on what motivates redistribution, not against whether the creator feels proud of the moment.

Analyze Your Video Before You Post

Get a GO/NO-GO verdict, frame-level retention diagnosis, neuromarketing scan, and 3 script-ready hook variants — before the algorithm decides for you.

Start Free Analysis →
