storycast-seed-agents

Submission Package

Updated: April 18, 2026

Track 01: AI Video Agents

The track choice is an inference from the public BetaHacks page as of April 18, 2026. StoryCast also overlaps with Content Automation, but AI Video Agents is the strongest fit.

One-Line Pitch

StoryCast is an autonomous multimodal agent that turns a single topic into a narrated one-minute short film using the BytePlus Seed generation stack.

Short Summary

StoryCast takes one plain-English idea, such as "the death of a star," and autonomously writes a scene-based narrative, generates storyboard images, synthesizes scene narration, animates each scene into video, and assembles the final film. The result is a complete explainer-style video from a single prompt.

Longer Project Description

StoryCast is built to show what a real video agent looks like when the orchestration itself is part of the product. A user provides one topic, and the system breaks it into a structured scene blueprint with narration, visual direction, motion cues, and tone metadata. Each scene becomes a storyboard frame, each frame becomes a video clip, each scene receives narration, and the pipeline merges everything into a polished 60-second film. The full process is visible, modular, and explainable, which makes it easy to demo and easy for judges to evaluate as true agentic execution rather than a single opaque generation call.
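The pipeline described above can be sketched as a small orchestration loop. This is a hypothetical illustration only: the `Scene` fields mirror the blueprint metadata named in the description (narration, visual direction, motion cues, tone), while every stage function is a labeled stand-in for the real BytePlus Seed generation calls, which this document does not show.

```python
from dataclasses import dataclass

@dataclass
class Scene:
    narration: str          # voiceover line for this scene
    visual_direction: str   # prompt for the storyboard image
    motion_cue: str         # camera/animation hint for video generation
    tone: str               # pacing/mood metadata

def write_blueprint(topic: str, n_scenes: int = 4) -> list[Scene]:
    """Stage 1 (stand-in): turn one topic into a structured scene blueprint."""
    return [
        Scene(
            narration=f"Scene {i + 1} narration about {topic}.",
            visual_direction=f"Storyboard frame {i + 1} for {topic}",
            motion_cue="slow push-in",
            tone="contemplative",
        )
        for i in range(n_scenes)
    ]

def render_scene(scene: Scene) -> dict:
    """Stages 2-4 (stand-ins): storyboard image -> video clip -> narration."""
    return {
        "frame": f"image({scene.visual_direction})",
        "clip": f"video({scene.motion_cue})",
        "audio": f"tts({scene.narration})",
    }

def assemble(clips: list[dict], duration_s: int = 60) -> dict:
    """Stage 5 (stand-in): merge all clips and audio into the final film."""
    return {"duration_s": duration_s, "scenes": clips}

def storycast(topic: str) -> dict:
    """Full agent run: one prompt in, one assembled film description out."""
    blueprint = write_blueprint(topic)
    clips = [render_scene(scene) for scene in blueprint]
    return assemble(clips)

film = storycast("the death of a star")
print(len(film["scenes"]), film["duration_s"])
```

Keeping each stage as a separate function is what makes the run "visible, modular, and explainable": every intermediate artifact (blueprint, frames, clips, audio) can be inspected or re-run independently during a demo.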

What Makes It Strong

Technical Stack

Suggested Demo Video Structure

Current Proof Points

Honest Compliance Note

The most recent successful public render used an ElevenLabs fallback for TTS because BytePlus Speech credentials were unavailable at run time. For strict BytePlus-only compliance in the final submission, switch the narration layer back to Seed Speech and rerun the pipeline with valid BYTEPLUS_TTS_APP_ID, BYTEPLUS_TTS_TOKEN, and BYTEPLUS_TTS_CLUSTER values.