How It Works

From Draft to Published in 7 Steps

A complete walkthrough of the submission, review, iteration, and publication process on theoryofeverything.ai.

Submission Lifecycle

Step 1: Create Your Account

Sign up with your email. Accept the terms. You're in.

No institutional affiliation, no endorsement required. Your account is your workspace for managing all your submissions.

Step 2: Draft Your Submission

Write your framework or paper in Markdown, TeX, or plain text.

Use the content editor or drag-and-drop a file. The AI will extract metadata — keywords, tags, equations, and predictions — automatically.

Step 3: AI Metadata Extraction

Before review, AI extracts structure from your content.

It identifies key equations, testable predictions (with falsification conditions and domains), suggested tags, and a maturity level. You can accept or override each suggestion.
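The extracted metadata can be pictured as a simple record that the author accepts or overrides field by field. A minimal sketch in Python; every class and field name here is illustrative, not the platform's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Prediction:
    # A testable claim with its falsification condition and domain,
    # mirroring what the AI extracts (names are illustrative).
    statement: str
    falsification_condition: str
    domain: str

@dataclass
class ExtractedMetadata:
    keywords: list[str] = field(default_factory=list)
    tags: list[str] = field(default_factory=list)
    equations: list[str] = field(default_factory=list)
    predictions: list[Prediction] = field(default_factory=list)
    maturity_level: str = "conceptual"  # the author may override any suggestion

meta = ExtractedMetadata(
    keywords=["entropy"],
    tags=["statistical-mechanics"],
    equations=[r"S = k_B \ln W"],
    predictions=[Prediction(
        statement="Effect X appears above energy E",
        falsification_condition="No effect X observed above E",
        domain="particle physics",
    )],
)
meta.maturity_level = "speculative"  # override one suggestion, keep the rest
```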

Step 4: Submit for AI Review

Multi-model AI evaluates your work across 7 scientific dimensions.

The review checks internal consistency, math validity, falsifiability, clarity, novelty, completeness, and (for frameworks) evidence strength. You get scores, detailed explanations, strengths, and specific improvement suggestions.

Step 5: Published or Conceptual Track

There are no dead ends. Every submission gets a path forward.

If all dimensions score ≥ 2/5 with an average ≥ 3/5, your work is published. Otherwise, it enters the Conceptual Track with a clear improvement roadmap. You can iterate and resubmit at any time.

Step 6: Link Supporting Papers

Frameworks grow stronger as you add supporting evidence.

Link papers to your framework with relationship types: supports, extends, applies, challenges, constrains. Each linked paper's review contributes to the framework's composite score.
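A link between a paper and a framework is just a paper plus one of the five relationship types named above. A minimal sketch; the five type names come from this page, while the class and field names are illustrative guesses:

```python
from dataclasses import dataclass
from typing import Literal

# The five relationship types listed on the page.
Relationship = Literal["supports", "extends", "applies", "challenges", "constrains"]

@dataclass
class PaperLink:
    # Hypothetical shape of a framework-to-paper link.
    paper_id: str
    relationship: Relationship

framework_links = [
    PaperLink("paper-001", "supports"),
    PaperLink("paper-002", "extends"),
]
```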

Step 7: Timestamped Predictions

Predictions are extracted and locked with immutable timestamps.

Once locked, prediction timestamps cannot be changed. When new data matches a prediction, the record proves when you made the call. Predictions track through statuses: pending → testable → confirmed/falsified.
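The status lifecycle reads as a tiny state machine. A sketch of the transitions implied by the arrow notation above; the guard function and its name are illustrative, not the platform's API:

```python
# Allowed transitions, read off the page's
# pending -> testable -> confirmed/falsified lifecycle.
TRANSITIONS = {
    "pending": {"testable"},
    "testable": {"confirmed", "falsified"},
    "confirmed": set(),   # terminal
    "falsified": set(),   # terminal
}

def advance(status: str, new_status: str) -> str:
    """Move a prediction to a new status, rejecting illegal jumps."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"cannot go from {status!r} to {new_status!r}")
    return new_status

s = advance("pending", "testable")
s = advance(s, "confirmed")  # a confirmed prediction cannot change again
```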

Publication Threshold

Published Track

  • All dimensions score at least 2/5
  • Overall average score of 3/5 or higher
  • Work appears on public Frameworks/Papers pages
  • 7-day grace period for corrections

Conceptual Track

  • Below the publication threshold
  • Receives detailed improvement roadmap
  • Visible on the Conceptual Track page
  • Can iterate and resubmit at any time

The AI's recommendation (“approve”/“revise”/“reject”) is advisory and displayed to the author, but publication is determined by the numerical scores.
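The publication rule above can be written down directly. A sketch assuming scores arrive as a mapping from dimension name to a 1-5 integer; the function name and dimension keys are illustrative:

```python
def is_published(scores: dict[str, int]) -> bool:
    """Publication rule from the page: every dimension scores at
    least 2/5 and the overall average is 3/5 or higher. The AI's
    approve/revise/reject recommendation plays no part here."""
    values = list(scores.values())
    return min(values) >= 2 and sum(values) / len(values) >= 3

paper = {"consistency": 4, "math": 3, "falsifiability": 3,
         "clarity": 3, "novelty": 2, "completeness": 3}
is_published(paper)                  # True: minimum is 2, average is 3.0
is_published({**paper, "math": 1})   # False: one dimension scores below 2
```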

The 7 Review Dimensions

Each dimension is scored 1-5 with a detailed explanation. Here's what each one measures and how to improve your scores.

Internal Consistency
What it measures: Do your claims contradict each other? Are assumptions consistent with conclusions?
How to improve: Review each section for logical flow. Have someone else read for contradictions.

Mathematical Validity
What it measures: Are equations correctly derived? Is the notation sound?
How to improve: Write a supporting paper with step-by-step derivations. Double-check all math.

Falsifiability
What it measures: Does the work make specific, testable predictions?
How to improve: Add quantitative predictions with clear falsification conditions.

Clarity
What it measures: Can a physicist in a related field follow the reasoning?
How to improve: Use clear section headings, define all terms, and build arguments step by step.

Novelty
What it measures: Does this offer something genuinely new?
How to improve: Explicitly reference prior work and articulate how yours differs.

Completeness
What it measures: Are boundary conditions, limitations, and edge cases addressed?
How to improve: Write supporting papers to fill gaps. Acknowledge limitations explicitly.

Evidence Strength
What it measures: How well do supporting papers back the framework's claims? (Frameworks only)
How to improve: Link supporting papers with clear relationship types: supports, extends, applies.

How Linking Papers Changes Scores

A standalone paper is reviewed on 6 dimensions; a framework adds a seventh, Evidence Strength. When you link supporting papers, the review becomes a composite evaluation of the entire body of work.

  • 6 dimensions for standalone papers
  • 7 dimensions for frameworks (adds Evidence Strength)
  • Papers can be linked, improving composite scores

The AI reviews the framework document plus all linked papers together. It evaluates consistency across papers, identifies gaps, and suggests additional papers that would strengthen the framework. Each linked paper makes the composite score more nuanced and credible.
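One way to picture how linked papers feed the composite score: average each dimension over the framework document and every linked paper that was scored on it. The page does not specify the real weighting, so the unweighted mean below is purely an assumption for illustration:

```python
def composite_scores(framework: dict[str, int],
                     linked: list[dict[str, int]]) -> dict[str, float]:
    """Average each dimension across the framework document and all
    linked papers that were scored on that dimension. An unweighted
    mean is assumed here; the platform's actual weighting is not
    documented on this page."""
    result = {}
    for dim, score in framework.items():
        contributions = [score] + [p[dim] for p in linked if dim in p]
        result[dim] = sum(contributions) / len(contributions)
    return result

fw = {"consistency": 3, "evidence_strength": 2}
papers = [{"consistency": 5}, {"consistency": 4}]
composite_scores(fw, papers)  # consistency rises to 4.0; evidence_strength stays 2.0
```

Under this sketch, each strong linked paper pulls the shared dimensions upward, which matches the page's claim that every linked paper makes the composite score more nuanced.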