Predict If Your Video Will Perform Before You Spend the Posting Slot

Every analytics dashboard you've ever used shows you what already happened. Your video got 400 views. Your watch time was 3 seconds. Your hook retention was 12%. Great — now you know it failed, and there's nothing you can do about it. You already spent the posting slot, already sent the signal to the algorithm, already burned through 300 test impressions on content that was broken from the start. The Viral Roast prediction API flips this. You send the video before posting and get back a full structural verdict — GO, NO_GO, or EDIT_REQUIRED — plus a per-platform scorecard, psychological trigger map, and a prioritized action plan telling you exactly what to fix. Not a magic number, not a guarantee. A structural assessment: does this video have the bones to perform, based on what actually drives completion, shares, and replays on each platform? Think of it as a dress rehearsal that gives you notes. Sometimes the notes confirm what you already felt. Sometimes they surprise you. Three times in the last six months, the notes saved me from throwing away videos that ended up being some of my best performers.
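
If you want to see what that looks like in practice before we go deeper, here's a minimal sketch of the call. The endpoint URL, auth header, and request field name are hypothetical placeholders you'd swap for whatever your own credentials and docs say; the publishingDecision field is what the API actually returns.

```python
import requests

# Hypothetical endpoint and auth; substitute your real credentials.
API_URL = "https://api.example.com/v1/predict"
API_KEY = "YOUR_API_KEY"

def predict(video_url: str) -> dict:
    """Send a draft video for pre-posting analysis and return the full verdict payload."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"videoUrl": video_url},  # field name assumed; base64 input is also accepted
        timeout=120,  # video analysis can take a while
    )
    resp.raise_for_status()
    return resp.json()

result = predict("https://cdn.example.com/drafts/my-video.mp4")
print(result["publishingDecision"])  # GO, NO_GO, or EDIT_REQUIRED
```

I'll reuse this predict() helper in the sketches throughout the rest of this piece.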

The 5 Things the Prediction API Actually Measures (And 3 Things It Doesn't Claim To)

Let me walk you through exactly what the prediction API evaluates, because I'm tired of people assuming it's some kind of crystal ball. It's not. It's a structural assessment tool, and it measures five specific things.

First, hook strength — does the first one to two seconds give a stranger a reason to stop scrolling? The API returns a saccadicLockScore from 0 to 10 that tells you how well the opening frames grab attention, a list of hook techniques it detected versus the ones you missed, and a goldenWindowVerdict on whether your opening survives the critical first moments. (I'll walk through a concrete before-and-after in the Hook Strength Analysis section below.)

Second, per-platform scorecards — the API scores your video 0 to 100 on each platform separately, and each scorecard comes with specific fixes, strengths, weaknesses, and a per-platform verdict. The same video might score 78 on TikTok, 65 on Reels, and 82 on YouTube Shorts, with completely different fix lists for each.

Third, psychological trigger mapping — the API identifies which psychological triggers are ACTIVE in your video and which ones are MISSING. It also generates an engagement forecast: what types of comments your video is likely to generate, a discussion depth score, and share trigger detection. This tells you whether your video creates the kind of emotional response that makes people tag a friend or leave a comment.

Fourth, comparative market analysis — the API places your video within your niche, identifies where it sits in the trend lifecycle (EMERGING, RISING, PEAK, SATURATED, or ZOMBIE), gives you a saturation score, and runs content fatigue analysis. This is the difference between knowing your video is structurally solid and knowing whether the market is ready for it.

Fifth, theoretical performance projections — the API returns theoreticalRetention, theoreticalViews, theoreticalLikes, and theoreticalShares, each showing current versus optimized projections. These aren't promises — they're estimates based on your video's structural characteristics.

On top of those five, you also get an overall publishing decision (GO, NO_GO, or EDIT_REQUIRED), a verdict (VIRAL, BORDERLINE, SOFT, or NO_GO), a prioritized action plan with execution shortcuts, AI artifacts detection, and a brand consistency score.
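
To make those five areas concrete, here's how I pull each one out of the response in my own scripts, reusing the predict() helper from the first sketch. The leaf field names are the ones the API documents; the container names and exact nesting are my assumptions, so check them against a live response.

```python
result = predict("https://cdn.example.com/drafts/my-video.mp4")

# 1. Hook strength
hook = result["hookAnalysis"]                  # container name assumed
print(hook["saccadicLockScore"])               # 0-10
print(hook["goldenWindowVerdict"])

# 2. Per-platform scorecards, scored 0-100 per platform
for card in result["scorecards"]:              # container name assumed
    print(card["platform"], card["score"], card["verdict"])

# 3. Psychological trigger map: ACTIVE vs MISSING
triggers = result["psychologicalProfile"]["triggers"]  # nesting assumed
missing = [t["name"] for t in triggers if t["status"] == "MISSING"]

# 4. Comparative market analysis
market = result["marketAnalysis"]              # container name assumed
print(market["trendLifecycle"], market["saturationScore"])

# 5. Theoretical projections, current vs optimized
views = result["theoreticalViews"]
print(views["current"], views["optimized"])    # nesting assumed
```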

Now here's what the prediction API does not measure, and I think this matters more than what it does measure, because this is where expectations get misaligned.

First, it does not predict performance from a script alone. You need to send an actual video — as a file or a URL — because the analysis depends on visual, audio, and structural elements that don't exist in text. If you want to test ideas before filming, the approach is different — more on that later.

Second, the API does not guarantee specific view counts. The theoretical projections give you current versus optimized estimates, but they're structural projections, not promises. A video with strong projections can still underperform if the timing is wrong or the topic is oversaturated. I posted a video about a specific project management method the same day a major tech CEO tweeted about using that exact method. The video got 340,000 views despite a BORDERLINE verdict — the topic momentum carried it far beyond what the structure alone would have delivered.

Third, the API does not measure cultural moment sensitivity. Some videos hit because they tap into a collective emotion that exists right now — post-election anxiety, a celebrity scandal, a seasonal mood that makes certain topics resonate more than they normally would. The API evaluates structure, not cultural context. It doesn't read the room because it doesn't know what room you're in. This means a video can get a VIRAL verdict and still underperform because of timing, and a video with a SOFT verdict can go viral because the topic is on fire right now. The prediction is about structural readiness, not cosmic alignment. Once you understand that distinction, the verdicts and scores become genuinely useful instead of frustrating.

3 Videos I Almost Didn't Post That the API Said Would Work

I want to tell you about three specific videos that changed how I think about my own creative judgment.

Video one was a walkthrough of how to organize a spreadsheet for content planning. I'm not kidding. A spreadsheet. I had filmed it on a Tuesday afternoon because I needed to fill a content slot and I had nothing else ready. The thumbnail was a screenshot of Google Sheets with some colored cells. The topic felt embarrassing — I kept thinking "nobody wants to watch someone explain a spreadsheet." I was about to archive it and film something else when I decided to run it through the prediction API as a test. The verdict came back VIRAL. I was surprised. The saccadicLockScore was 8 out of 10, and when I looked at why, it made sense in hindsight: the hook was "the spreadsheet that replaced my $200/month content calendar tool." The API detected multiple hook techniques in those opening frames — information gap, concrete dollar amount, implied transformation. The TikTok scorecard came in at 84 with barely any fixes needed, and the psychological profile showed several engagement triggers as ACTIVE, including the kind that drive save-and-share behavior. I posted it expecting maybe 2,000 views based on my gut feeling about the topic. It got 280,000 views. The audience didn't care that the topic was boring to me — the structure pulled them through.

Video two was an accident. My tripod broke mid-shoot and I ended up filming from a low angle, basically looking up at my face. I thought it looked terrible — unflattering angle, weird perspective, not what my audience was used to seeing. I almost deleted it immediately. The API verdict: BORDERLINE. Not amazing, but the saccadicLockScore was 7 out of 10 because the unusual camera angle actually created visual novelty — it looked different from everything else in the feed, which is exactly what makes someone pause their scroll. The per-platform scores ranged from 65 to 73 depending on the platform. I posted it expecting it to be a throwaway. It got 150,000 views. The weird angle that I thought was a liability was actually an asset because it broke the pattern of what my audience expected to see from me.

Video three is the one that really messed with my head. It was a talking-head video where I ranted about a SaaS tool I'd been paying for and finally canceled. Low production value — no B-roll, no fancy graphics, no text overlays. Just me looking into the camera and complaining for 47 seconds. I thought it was too raw, too negative, too unpolished. I almost didn't post it because I was worried it would make me look unprofessional. I ran it through the API mostly to confirm my suspicion that it would get a NO_GO verdict. Instead it came back VIRAL. The saccadicLockScore was 9 out of 10 — the highest I'd gotten in weeks. The hook was "I just canceled a tool I've paid $89/month for three years and I should have done it two years ago." When I read the scorecard breakdown, it clicked: the hook techniques detected included specific dollar anchoring, timeframe tension, and emotional admission — all three landing in the first two seconds. The psychological profile showed almost every engagement trigger as ACTIVE, with the comment forecast predicting high discussion depth because genuine frustration creates natural tension shifts — my voice intensity went up and down as I moved between specific complaints and broader conclusions about the tool, and those shifts kept the emotional arc interesting even though the production was minimal. I posted it. It got 90,000 views.

The bigger lesson from all three of these videos is this: my gut feeling about which videos are "good" is shaped by aesthetics, production quality, and my own insecurity about whether a topic is impressive enough. The API doesn't have any of that baggage. It's looking purely at structure — does the hook lock attention, do the psychological triggers fire, does the format work for each platform? Three separate times, the API saw something I was blind to because I was too close to the content and too caught up in my own ideas about what "quality" looks like. That doesn't mean I override my judgment every time the API disagrees. It means I at least stop and check whether my judgment is about structure or about ego. About half the time, it's ego.

How to Go from Single Analyses to a Content Strategy

Here's the step-by-step process I used to turn the prediction API from a novelty into something that actually changed my content output, and you can follow the same path because it's straightforward once you see the logic.

Step one: analyze 10 videos you've already posted and compare the verdicts and platform scores to what actually happened. Pick a mix — some that performed well, some that flopped, some that landed in the middle. You're not trying to prove the API works or doesn't work. You're trying to calibrate. You'll immediately see whether the verdicts and scorecard numbers track with reality for your specific niche and content style. When I did this, 7 out of 10 videos were in the right order — the ones with VIRAL verdicts and higher platform scores had performed better than the BORDERLINE and SOFT ones. The three that were out of order all had clear external explanations: one rode a trending topic, one got killed by a platform glitch that suppressed it for 12 hours, and one was a collaboration that got boosted by the other creator's audience sharing it.

Step two: analyze your next 10 videos before posting them. Write down the verdicts and platform scores, then post everything regardless. Don't filter yet. You're building your personal calibration data — you need to see how the API assessments map to your actual results before you start making decisions based on them. After this batch, you'll have 20 data points, which is enough to see a pattern.

Step three: start using the verdicts to make decisions. The API gives you a clear publishing decision — GO, NO_GO, or EDIT_REQUIRED — and that's your starting point. For me, videos getting a NO_GO verdict almost never performed well regardless of how interesting the topic was. EDIT_REQUIRED videos were worth fixing because the action plan told me exactly what to change in priority order. GO videos with a VIRAL verdict had a strong hit rate, and even when they didn't blow up, they still performed at an acceptable baseline level.
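
For steps one and two, I keep the numbers in a plain CSV and eyeball the ordering with a few lines of Python. The CSV layout is just my own tracking format, nothing the API requires:

```python
import csv

# One row per posted video: the pre-posting verdict and the views it actually got.
with open("calibration.csv") as f:
    rows = list(csv.DictReader(f))  # columns: title, verdict, actual_views

# Sort by real-world performance; if calibration is good, the VIRAL verdicts
# should cluster near the top of this printout and NO_GO near the bottom.
rows.sort(key=lambda r: int(r["actual_views"]), reverse=True)
for r in rows:
    print(f"{r['verdict']:>10}  {int(r['actual_views']):>8}  {r['title']}")
```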

Step four is where the real payoff starts, and most people never get here because they treat the analysis as a one-off tool instead of a data source. The API stores your analysis history, and that history is where the patterns live. After a month of analysis data — assuming you're posting regularly, that's 30 to 60 analyzed videos — look at the patterns in your scorecard breakdowns. Which categories of weaknesses keep showing up across your content?

For me, it was hooks. When I analyzed my first 45 videos' results, the majority of the ones getting SOFT or NO_GO verdicts had weak saccadicLockScores and multiple missed hook techniques. The per-platform scores were usually decent. The psychological triggers were firing. But the hooks were consistently weak because I had a habit of starting videos with context-setting sentences that assumed the viewer already cared about the topic. Things like "So I've been testing this new approach to content batching" — that hook requires the viewer to already care about content batching, which a stranger scrolling through their feed doesn't. Once I saw that pattern in my accumulated data, I rewrote all my hook templates to front-load the specific claim or question instead of building up to it. "I filmed 30 videos in 4 hours using a system I'll show you in 45 seconds" instead of "So I've been testing this new approach to content batching." My average platform scores jumped by nearly 10 points in two weeks just from fixing hooks.

The longer-term play is even more valuable. After three months of analysis data, you have a personal library of what works for your niche. You can see which hook techniques consistently get detected in your best performers, which psychological triggers your audience responds to, which platforms your content is naturally strongest on, and where in the trend lifecycle your best topics were when you posted them. That library becomes your content strategy — not based on what some growth guru said works in a YouTube video they posted last year, but based on what your actual analysis data shows works for your specific audience on your specific platforms right now. I keep a simple document with my top hook techniques that consistently score well, the psychological triggers my audience responds to most, and the platform-specific patterns that drive my highest scorecard numbers. When I sit down to plan content, that document is my starting point, and every month I update it with whatever new patterns emerged from the latest batch of analysis data.
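
Here's roughly the script I ran over those 45 saved analyses to find the hook pattern. It assumes you archive each API response as a JSON file; the folder layout, container nesting, and the shape of the weaknesses list are assumptions about my own archive, not documented structure:

```python
import json
from collections import Counter
from pathlib import Path

weakness_counts = Counter()
weak_hooks = 0

# One saved JSON response per analyzed video, in an analyses/ folder.
for path in Path("analyses").glob("*.json"):
    result = json.loads(path.read_text())
    if result["hookAnalysis"]["saccadicLockScore"] < 5:  # nesting assumed
        weak_hooks += 1
    for card in result["scorecards"]:                    # container assumed
        weakness_counts.update(card["weaknesses"])       # list of strings assumed

print(f"Videos with weak hooks: {weak_hooks}")
for weakness, n in weakness_counts.most_common(5):
    print(f"{n:3}x  {weakness}")
```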

Hook Strength Analysis

The API returns a saccadicLockScore from 0 to 10 that measures how effectively your opening frames grab and hold a viewer's attention. Alongside the score, you get a breakdown of hook techniques detected versus missed — specific structural elements like information gaps, emotional hooks, pattern interrupts — and a goldenWindowVerdict that tells you whether your video survives the critical first moments. This isn't based on keyword matching or clickbait formula databases. It's based on the structural elements that create curiosity in the opening frames. I had a video where the saccadicLockScore was 3 because I spent the first 1.3 seconds adjusting my camera angle before speaking. The API flagged two missed hook techniques. I trimmed those frames, added a direct opening line, and the score jumped to 7 on the next pass.
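
In practice I gate my re-edits on that score before anything else. A sketch, reusing the predict() helper from the first example; the threshold of 6 is my own rule of thumb, and the hookAnalysis nesting and missedTechniques field name are assumptions:

```python
hook = predict("https://cdn.example.com/drafts/my-video.mp4")["hookAnalysis"]

if hook["saccadicLockScore"] < 6:
    print("Rework the opening frames before posting. Missed techniques:")
    for technique in hook["missedTechniques"]:
        print(f"  - {technique}")
else:
    print(f"Golden window verdict: {hook['goldenWindowVerdict']}")
```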

Per-Platform Scorecard

The same video performs differently on different platforms, and the scorecard reflects that reality. The API scores your video 0 to 100 on each platform separately, and each scorecard comes with specific fixes to implement, strengths to keep, weaknesses to address, and a per-platform verdict. TikTok rewards rapid pacing and hooks that land in the first second. Instagram Reels favors a slightly longer setup with a clear payoff moment. YouTube Shorts tolerates more depth but has different structural expectations. One video, multiple different scores, multiple different fix lists. I stopped cross-posting the same video everywhere once I saw how different the platform scores were — now I make platform-specific edits based on each scorecard's fix suggestions, and the performance difference is measurable.
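
My cross-posting decision now looks roughly like this: post as-is wherever the score clears a bar, apply the fix list first everywhere else. The 75 bar is my own choice, and the field names inside each scorecard are assumptions:

```python
for card in predict("https://cdn.example.com/drafts/my-video.mp4")["scorecards"]:
    if card["score"] >= 75:
        print(f"{card['platform']}: post as-is ({card['score']}/100)")
    else:
        print(f"{card['platform']}: apply these fixes first:")
        for fix in card["fixes"]:
            print(f"  - {fix}")
```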

Psychological Trigger Map

The API identifies which psychological triggers in your video are ACTIVE and which are MISSING, giving you a clear picture of the emotional machinery driving engagement. Beyond the trigger status, you get an engagement forecast that predicts what types of comments your video will generate, a discussion depth score estimating how much conversation it sparks, and share trigger detection that tells you whether the content has the elements that make someone forward it to a friend. This is the difference between knowing a video "feels engaging" and knowing exactly which emotional levers are pulled and which ones you left on the table. I've used the MISSING triggers list to add a single line to a video that flipped a trigger from inactive to active, and watched the comment section completely change character.
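
Pulling the MISSING list and the forecast out programmatically is simple once you have the response. The ACTIVE/MISSING statuses and the forecast outputs are documented; the container and field names in this sketch are assumed:

```python
profile = predict("https://cdn.example.com/drafts/my-video.mp4")["psychologicalProfile"]

for trigger in profile["triggers"]:
    if trigger["status"] == "MISSING":
        print(f"Missing trigger: {trigger['name']}")

forecast = profile["engagementForecast"]
print("Predicted comment types:", forecast["commentTypes"])
print("Discussion depth score:", forecast["discussionDepthScore"])
print("Share triggers detected:", forecast["shareTriggers"])
```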

Comparative Market Analysis

The API doesn't just look at your video in isolation — it places it within your niche and tells you where you stand. You get a trend lifecycle assessment (EMERGING, RISING, PEAK, SATURATED, or ZOMBIE) that tells you whether the wave you're trying to ride is still building or already crashing. A saturation score tells you how crowded the space is. Content fatigue analysis flags whether audiences have been overexposed to your type of content recently. This is the context that turns a good platform score into a strategic decision. I had a video that scored well across the board but the trend lifecycle came back SATURATED and the saturation score was through the roof — I held it for two weeks until the wave passed and the fatigue cleared, then posted it into a less crowded window.
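
That hold-or-post decision is easy to encode. The lifecycle stages and saturation score are documented outputs; the container name and the threshold of 80 are my own choices:

```python
result = predict("https://cdn.example.com/drafts/my-video.mp4")
market = result["marketAnalysis"]

structurally_ready = result["publishingDecision"] == "GO"
market_crowded = (
    market["trendLifecycle"] in ("SATURATED", "ZOMBIE")
    or market["saturationScore"] > 80
)

if structurally_ready and market_crowded:
    print("Hold: the structure is fine, wait for the wave to pass")
elif structurally_ready:
    print("Post now")
else:
    print("Fix the structure first")
```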

Prioritized Action Plan

Every analysis comes with a prioritized action plan that tells you exactly what to fix and in what order of impact. No vague suggestions like "improve your hook" — the plan gives you specific tasks ranked by how much they'll move the needle, plus execution shortcuts that tell you how to implement each fix efficiently. The action plan connects directly to the scorecard weaknesses and missing psychological triggers, so every suggested change maps back to a specific problem the API identified. When a video comes back with an EDIT_REQUIRED publishing decision, the action plan is your repair manual. I've had videos go from EDIT_REQUIRED to GO by working through just the top two items on the action plan — usually takes 15 minutes of re-editing, not a complete reshoot.
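
When a video comes back EDIT_REQUIRED, my repair loop looks like this. The priority ordering and execution shortcuts are documented features of the action plan; the actionPlan field names here are assumptions:

```python
DRAFT_URL = "https://cdn.example.com/drafts/my-video.mp4"
result = predict(DRAFT_URL)

while result["publishingDecision"] == "EDIT_REQUIRED":
    top = result["actionPlan"][0]  # highest-impact fix comes first
    print("Fix:", top["task"])
    print("Shortcut:", top["executionShortcut"])
    input("Re-edit, re-upload the draft, then press Enter to re-analyze...")
    result = predict(DRAFT_URL)

print("Final decision:", result["publishingDecision"])
```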

How accurate is this really? Give me real numbers.

Here's what I've tracked over six months of using the prediction API on my own content and on content I consult on for clients. I analyzed 847 videos before posting them and tracked actual performance across TikTok, Reels, and YouTube Shorts. Videos that received a VIRAL verdict consistently outperformed those with BORDERLINE or SOFT verdicts, and videos with high per-platform scorecard numbers (above 75) almost always cleared my performance thresholds. The correlation is clear but it's not perfect, and I'd be lying if I told you otherwise. About 8% of videos with a VIRAL verdict still flopped — meaning under 1,000 views — and when I dug into why, it was almost always timing or topic saturation. One had strong scores across every platform but got 600 views because three major creators in the same niche posted about the exact same topic that morning. Another had a VIRAL verdict but got 400 views because I posted it during a major sporting event when my audience was watching the game instead of scrolling. On the other side, about 5% of SOFT-verdict videos performed surprisingly well — above 10,000 views — usually because the topic itself was trending hard enough to carry a structurally weak video on pure momentum.

The analysis doesn't guarantee outcomes. What it does is shift the probability distribution in your favor. If you're posting 10 videos a week and the API helps you identify and fix the 2 or 3 that were structurally broken before posting — thanks to the per-platform fix lists and the prioritized action plan — that's 2 or 3 fewer wasted posting slots and 2 or 3 fewer bad signals sent to the algorithm about your account. Over a month, that compounds. After three months of consistently posting structurally sound content, the difference in account-level algorithmic trust is measurable — my impression rate on new posts increased by about 35% compared to the three months before I started using the API.

Can I predict performance from just a script, without filming?

No — the API requires an actual video, sent either as a base64-encoded file or as a URL, to run its analysis. It needs the visual, audio, and structural elements that only exist in a finished video. You can't send a script and get a prediction, because the saccadicLockScore, hook technique detection, per-platform scorecards, psychological trigger analysis, and comparative market positioning all depend on analyzing actual video content. That said, you don't need a polished final cut. I regularly send rough cuts — quick recordings from my phone with no editing, no text overlays, no music. The API analyzes whatever you send, and a rough cut is enough to catch the biggest structural problems: weak hooks, missing psychological triggers, poor platform fit. My workflow is to film a quick 30-second version of the concept on my phone, send it through the API, check the verdict and action plan, and then decide whether to invest the time in a full production. If the rough cut comes back NO_GO with a weak saccadicLockScore and mostly MISSING triggers, I know the concept needs fundamental rework before I spend hours on editing and polish. If it comes back BORDERLINE or better with a clear action plan, I use those fix suggestions to guide the full production. It's not as fast as analyzing a script would be, but it's far more reliable because the API is evaluating the actual content your audience will see, not a text approximation of it.
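
Sending a rough cut as base64 instead of a URL looks like this, reusing the API_URL and API_KEY constants from the first sketch. The API accepts base64 or URL input; the videoBase64 request field name is my assumption:

```python
import base64

import requests

# Encode a quick phone recording and send it for analysis.
with open("rough-cut.mp4", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"videoBase64": encoded},  # field name assumed
    timeout=120,
)
resp.raise_for_status()
result = resp.json()
print(result["verdict"])  # decide whether the concept earns a full production
```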

What's the difference between the publishing decision and the full analysis?

They're part of the same response, but they serve different purposes and you'd use them in different situations. The publishingDecision is a three-way gate: GO, NO_GO, or EDIT_REQUIRED. It's designed for automated pipelines where you need a clear decision that doesn't require human interpretation. You feed videos into a queue, the API checks each one, and only the ones that get GO move forward to posting. EDIT_REQUIRED ones get flagged for fixes. NO_GO ones get pulled. No ambiguity. The full analysis is everything else that comes back in the same response: the overall verdict (VIRAL, BORDERLINE, SOFT, or NO_GO), per-platform scorecards with scores from 0 to 100 and specific fixes, the saccadicLockScore and hook technique breakdown, psychological trigger mapping, comparative niche analysis with trend lifecycle and saturation data, theoretical performance projections, and the prioritized action plan. That detail is what makes it useful for a manual workflow — when you're sitting there with a video and you want to understand what's working and what isn't before you decide whether to post it, rework it, or scrap it.

In my workflow, I use both layers. My automated pipeline uses the publishingDecision as a filter — anything that isn't GO gets flagged for manual review instead of auto-posting. But when I'm working manually on content I care about, I dig into the full analysis because I want to see the per-platform scorecard breakdowns and the action plan. If a video gets EDIT_REQUIRED with a saccadicLockScore of 3 and several missing hook techniques, I know exactly what to fix: rework the opening, re-film the first few seconds, and run the analysis again. The action plan tells me what to prioritize and the scorecard weaknesses tell me exactly where to focus on each platform.
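
The automated side of my pipeline is essentially this one function, reusing the predict() helper from the first sketch. The three decision values are documented; queue_post, flag_for_review, and pull_from_queue are hypothetical stand-ins for whatever your own pipeline does at each branch:

```python
def route(video_url: str) -> None:
    """Gate a queued video on publishingDecision alone; no human interpretation needed."""
    result = predict(video_url)
    decision = result["publishingDecision"]
    if decision == "GO":
        queue_post(video_url)                             # hypothetical
    elif decision == "EDIT_REQUIRED":
        flag_for_review(video_url, result["actionPlan"])  # hypothetical
    else:  # NO_GO
        pull_from_queue(video_url)                        # hypothetical
```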

I got a VIRAL verdict but the video flopped. What happened?

This happens and it's frustrating, but there are three likely explanations, and all of them exist outside what structural analysis can see.

First explanation: topic saturation. Your video was structurally excellent — strong saccadicLockScore, high platform scores, active psychological triggers, solid scorecard across the board — but 30 other creators posted about the same topic this week and the audience was already fatigued before your video ever reached them. Now, the API does include comparative analysis with trend lifecycle and saturation scoring, so check those carefully — if the trend came back SATURATED or ZOMBIE and the saturation score was high, the API was actually warning you even though the structural verdict was strong. The lesson: a VIRAL verdict on the structural side combined with a SATURATED trend lifecycle is a signal to wait, not to post immediately.

Second explanation: audience mismatch. The video was structurally strong for a general audience, but your specific followers expect a different tone, topic, or format from you. If you normally post humorous commentary and you suddenly post a serious step-by-step tutorial, your existing audience might not engage even though the video is well-constructed — it's just not what they followed you for. The first few hundred viewers who see your post are mostly existing followers, and if they don't engage, the algorithm doesn't push it further. Structural quality and audience expectation alignment are two different things.

Third explanation, and this one annoys people but it's real: algorithmic randomness. The seed test group — the first 200 to 500 people who see your video in the initial distribution phase — is partly random. Sometimes you get an unrepresentative sample. Your video about productivity tips gets shown to a random cluster of people who happen to be sports fans and couldn't care less about productivity. They don't engage, the algorithm reads that as a signal that the content isn't interesting, and the video never gets pushed to the broader audience that would have loved it. This is documented, it's known, and it happens to everyone.

A VIRAL verdict means the video was structurally ready to perform. It does not mean the world was ready for the video at that moment. Structure is one piece of the puzzle. Timing, audience context, and distribution luck are the other pieces, and no prediction tool controls those.

Does Instagram's Originality Score affect my content's reach?

Yes. Instagram introduced an Originality Score in 2026 that fingerprints every video. Content that shares 70% or more visual similarity with existing posts on the platform gets suppressed in distribution. Aggregator accounts saw 60-80% reach drops when this rolled out, while original creators gained 40-60% more reach. If you cross-post from TikTok, strip watermarks and re-edit with different text styling, color grading, or crop framing so the visual fingerprint feels native to Instagram.

How does YouTube's satisfaction metric affect video performance in 2026?

YouTube shifted to satisfaction-weighted discovery in 2025-2026. The algorithm now measures whether viewers felt their time was well spent through post-watch surveys and long-term behavior analysis, not just watch time. Videos where viewers subscribe, continue their session, or return to the channel receive stronger distribution. Misleading hooks that inflate clicks but disappoint viewers will hurt your channel performance across all formats, including Shorts and long-form.