How It Works
From Draft to Published in 7 Steps
A complete walkthrough of the submission, review, iteration, and publication process on theoryofeverything.ai.
Submission Lifecycle
Create Your Account
Sign up with your email. Accept the terms. You're in.
No institutional affiliation, no endorsement required. Your account is your workspace for managing all your submissions.
Draft Your Submission
Write your framework or paper in Markdown, TeX, or plain text.
Use the content editor or drag-and-drop a file. The AI will extract metadata — keywords, tags, equations, and predictions — automatically.
AI Metadata Extraction
Before review, AI extracts structure from your content.
It identifies key equations, testable predictions (with falsification conditions and domains), suggested tags, and a maturity level. You can accept or override each suggestion.
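The extracted metadata might be shaped roughly like the following sketch. All field names and values here are illustrative assumptions, not the platform's actual schema:

```python
# Hypothetical shape of AI-extracted metadata. Field names are illustrative,
# not the platform's real schema; every suggestion can be accepted or overridden.
extracted = {
    "keywords": ["emergent spacetime", "entropy"],
    "tags": ["cosmology"],                      # suggested tags
    "equations": [r"S = k_B \ln \Omega"],       # key equations found in the text
    "predictions": [
        {
            "claim": "Effect X appears above energy scale E",
            "falsification_condition": "No effect X observed above E",
            "domain": "particle physics",
        }
    ],
    "maturity_level": "conceptual",             # AI-suggested maturity level
}
```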
Submit for AI Review
Multi-model AI evaluates your work across 7 scientific dimensions.
The review checks internal consistency, math validity, falsifiability, clarity, novelty, completeness, and (for frameworks) evidence strength. You get scores, detailed explanations, strengths, and specific improvement suggestions.
Published or Conceptual Track
There are no dead ends. Every submission gets a path forward.
If all dimensions score ≥ 2/5 with an average ≥ 3/5, your work is published. Otherwise, it enters the Conceptual Track with a clear improvement roadmap. You can iterate and resubmit at any time.
Link Supporting Papers
Frameworks grow stronger as you add supporting evidence.
Link papers to your framework with relationship types: supports, extends, applies, challenges, constrains. Each linked paper's review contributes to the framework's composite score.
Timestamped Predictions
Predictions are extracted and locked with immutable timestamps.
Once locked, a prediction's timestamp cannot be altered. When new data matches a prediction, the record proves when you made the call. Predictions move through statuses: pending → testable → confirmed/falsified.
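The status lifecycle above can be sketched as a small state machine. The transition map is an assumption inferred from the documented statuses, not the platform's actual code:

```python
# Sketch of the prediction status lifecycle: pending -> testable -> confirmed/falsified.
# The transition map is inferred from the documented statuses, not the real implementation.
ALLOWED_TRANSITIONS = {
    "pending": {"testable"},
    "testable": {"confirmed", "falsified"},
    "confirmed": set(),   # terminal: the call was proven right
    "falsified": set(),   # terminal: the call was proven wrong
}

def advance(status: str, new_status: str) -> str:
    """Return new_status if the transition is allowed, else raise ValueError."""
    if new_status not in ALLOWED_TRANSITIONS[status]:
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status
```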
Publication Threshold
Published Track
- ✓ All dimensions score at least 2/5
- ✓ Overall average score of 3/5 or higher
- ✓ Work appears on public Frameworks/Papers pages
- ✓ 7-day grace period for corrections
Conceptual Track
- → Below the publication threshold
- → Receives detailed improvement roadmap
- → Visible on the Conceptual Track page
- → Can iterate and resubmit at any time
The AI's recommendation (“approve”/“revise”/“reject”) is advisory and displayed to the author, but publication is determined by the numerical scores.
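The threshold logic above can be written as a one-line rule: publish when every dimension scores at least 2/5 and the average is at least 3/5. This is a minimal sketch of that rule; the dimension keys are hypothetical labels, not the platform's API:

```python
# Minimal sketch of the publication rule: every dimension >= 2/5 AND average >= 3/5.
# Dimension names are illustrative labels, not the platform's actual identifiers.
def track(scores: dict[str, int]) -> str:
    vals = list(scores.values())
    if min(vals) >= 2 and sum(vals) / len(vals) >= 3:
        return "published"
    return "conceptual"

scores = {
    "internal_consistency": 4, "mathematical_validity": 3,
    "falsifiability": 3, "clarity": 4, "novelty": 2,
    "completeness": 3, "evidence_strength": 3,
}
# Lowest score is 2 and the average is 22/7 (about 3.14), so this one is published.
```

Note that both conditions must hold: a single 1/5 routes the work to the Conceptual Track regardless of the average, and an average below 3/5 does the same even if no dimension dips below 2.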
The 7 Review Dimensions
Each dimension is scored 1-5 with a detailed explanation. Here's what each one measures and how to improve your scores.
- Internal Consistency
- Mathematical Validity
- Falsifiability
- Clarity
- Novelty
- Completeness
- Evidence Strength
How Linking Papers Changes Scores
A standalone paper is reviewed on 6 dimensions; a framework adds Evidence Strength, for 7. When you link supporting papers, the review becomes a composite evaluation of the entire body of work.
- 6 dimensions for standalone papers
- 7 dimensions for frameworks (adds Evidence Strength)
- Linked papers feed into the framework's composite score
The AI reviews the framework document plus all linked papers together. It evaluates consistency across papers, identifies gaps, and suggests additional papers that would strengthen the framework. Each linked paper makes the composite score more nuanced and credible.
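One plausible way to picture the composite is a simple average over the framework and its linked papers. The platform's actual weighting is not documented here, so the function below is only a sketch of the idea, not the real scoring formula:

```python
# Illustrative composite score: average the framework's own review score with
# those of its linked papers. The platform's real weighting is not documented
# here; a plain mean is used purely to show how linked papers shift the score.
def composite(framework_score: float, linked_paper_scores: list[float]) -> float:
    all_scores = [framework_score, *linked_paper_scores]
    return sum(all_scores) / len(all_scores)
```

Under this sketch, linking two strong papers (say 4.0 and 5.0) to a framework scored 3.0 on its own would lift the composite to 4.0, which mirrors the point above: each linked paper makes the composite more credible.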