Frameworks on TOE-Share
A framework is the big idea. It defines the assumptions, the math, and the predictions. Papers support it, extend it, or challenge it. Here's everything you need to know.
What Is a Framework?
A framework on TOE-Share is a foundational theoretical model. It defines core assumptions, mathematical structure, and a landscape of predictions. Think of it as the overarching “big idea” that individual papers support, extend, or challenge.
Frameworks aren't limited to any single discipline or paradigm. Whether you're proposing a unified field theory, a new interpretation of quantum mechanics, or an information-geometric model of consciousness, the platform evaluates the rigor of your theory — not whether it matches mainstream consensus.
How Frameworks Differ from Papers
Framework
- Defines the entire prediction landscape
- States foundational assumptions and axioms
- Provides overarching mathematical structure
- Grows stronger as supporting papers are linked
- Reviewed on 7 dimensions including Evidence Strength
Paper
- Focuses on a specific research contribution
- Tests one or a few predictions
- Provides derivations, proofs, or data analysis
- Can be linked to one or more frameworks
- Reviewed on 6 dimensions (no Evidence Strength)
A paper might test one prediction of a framework. The framework defines the entire prediction landscape that the paper operates within. Together, they form a body of evidence — and TOE-Share's composite scoring reflects that.
The Framework Lifecycle
Draft
Write your framework content in Markdown, LaTeX, or plain text. Describe core assumptions, mathematical structure, and predictions. Upload via the editor or drag-and-drop a file.
AI Review
A multi-agent AI system evaluates your framework across 7 scientific dimensions: clarity, internal consistency, mathematical rigor, empirical grounding, novelty, testability, and literature engagement. Each dimension receives a 1–5 score with detailed feedback.
Published or Conceptual
If every dimension scores ≥ 2/5 and the overall average is ≥ 3/5, your framework is published. If it falls below the threshold, it enters the Conceptual Track — you receive a detailed improvement roadmap and your work remains visible while you strengthen it.
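The publication rule above can be expressed compactly. This is an illustrative sketch of the stated thresholds, not TOE-Share's actual implementation; the dimension names are taken from this guide, and the function name is our own.

```python
# Illustrative sketch of the stated rule: a framework is published when
# every dimension scores at least 2/5 AND the overall average is at least 3/5.
DIMENSIONS = [
    "clarity", "internal_consistency", "mathematical_rigor",
    "empirical_grounding", "novelty", "testability", "literature_engagement",
]

def review_outcome(scores: dict[str, int]) -> str:
    """Return 'published' or 'conceptual' for a set of 1-5 dimension scores."""
    values = [scores[d] for d in DIMENSIONS]
    if min(values) >= 2 and sum(values) / len(values) >= 3:
        return "published"
    return "conceptual"
```

Note that both conditions must hold: a framework averaging 4/5 still lands on the Conceptual Track if any single dimension scores 1/5.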
Iterate
Revise and resubmit at any time. Link supporting papers, refine your math, tighten your predictions. Each revision is reviewed fresh, and your version history tracks the journey.
The 7 Review Dimensions
Every framework is evaluated across these dimensions by a multi-agent AI system. Each dimension receives a 1–5 score with detailed explanations, strengths, and improvement suggestions.
Clarity
Can a physicist in a related field follow the reasoning?
Internal Consistency
Do all claims, assumptions, and conclusions cohere without contradiction?
Mathematical Rigor
Are equations correctly derived and notation well-defined?
Empirical Grounding
Does the framework connect to observable phenomena and existing data?
Novelty
Does this contribute something genuinely new to the field?
Testability
Are there specific, falsifiable predictions with clear conditions?
Literature Engagement
Does the framework situate itself within existing research?
For a deeper breakdown of each dimension, use the Rigor Guides or see the full walkthrough.
Linking Papers to Frameworks
Papers can be linked to a framework with a declared relationship type: each linked paper is marked as supporting, extending, or challenging the framework.
When papers are linked, the AI reviews the framework and all its linked papers together, producing a composite score that reflects the full body of evidence. Each linked paper makes the evaluation more nuanced and credible.
If a linked paper is updated or new papers are added, the framework may benefit from re-review to reflect the strengthened (or challenged) evidence base.
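TOE-Share does not publish its composite formula, so the following is a purely hypothetical sketch of how a framework score and its linked papers' scores might be combined. The `paper_weight` parameter and the weighted-mean form are our own assumptions for illustration only.

```python
# Hypothetical composite: a weighted mean in which each linked paper
# contributes a fraction (paper_weight) of the framework's own weight.
# NOT TOE-Share's actual formula -- illustration only.
def composite_score(framework_avg: float, paper_avgs: list[float],
                    paper_weight: float = 0.5) -> float:
    total = framework_avg + paper_weight * sum(paper_avgs)
    weight = 1.0 + paper_weight * len(paper_avgs)
    return total / weight
```

Under this sketch, a framework with no linked papers keeps its own average, and each added paper pulls the composite toward the evidence base, which is why re-review after linking new papers can move the score in either direction.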
Paradigm Neutrality
TOE-Share does not enforce adherence to mainstream consensus. Instead, frameworks are evaluated on internal logical consistency within their stated assumptions.
When you submit a framework, you can declare your foundational assumptions — the axioms your theory builds upon. The AI reviewers evaluate rigor within those boundaries rather than penalizing departures from established paradigms.
This means a framework that challenges Standard Model assumptions isn't automatically penalized, and a novel interpretation of quantum mechanics is judged on whether the math works, the predictions are testable, and the logic is sound, not on whether it aligns with the prevailing view.
Maturity Levels
Every framework on TOE-Share is assigned a maturity level that reflects how far it has progressed from initial idea to empirically tested theory.
Conceptual
A qualitative idea or hypothesis without formal mathematical formulation. Often the starting point for a new theoretical direction.
Structured
Core assumptions are clearly stated and relationships between concepts are defined, but detailed derivations or predictions may still be in progress.
Predictive
The framework produces specific, testable predictions with falsification conditions. Mathematical structure is in place and the theory can be compared against observation.
Tested
At least some predictions have been compared against empirical data. Confirmed or falsified predictions are on record with immutable timestamps.
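The four levels above form an ordered progression, which can be sketched as an enum with a toy assignment heuristic. The attribute flags below are our own simplification of the criteria described in this section, not the platform's real classifier.

```python
# Illustrative only: the four maturity levels as an ordered enum, plus a
# toy heuristic mirroring the criteria described above.
from enum import IntEnum

class Maturity(IntEnum):
    CONCEPTUAL = 1  # qualitative idea, no formal math
    STRUCTURED = 2  # assumptions stated, relationships defined
    PREDICTIVE = 3  # specific, falsifiable predictions in place
    TESTED = 4      # predictions compared against empirical data

def assign_maturity(has_stated_assumptions: bool,
                    has_falsifiable_predictions: bool,
                    has_empirical_comparison: bool) -> Maturity:
    """Pick the highest level whose criteria are met (simplified)."""
    if has_empirical_comparison:
        return Maturity.TESTED
    if has_falsifiable_predictions:
        return Maturity.PREDICTIVE
    if has_stated_assumptions:
        return Maturity.STRUCTURED
    return Maturity.CONCEPTUAL
```

Because the enum is ordered, levels compare naturally: a Predictive framework ranks above a Structured one, which is what lets the platform present maturity as a progression rather than a set of unrelated labels.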
Ready to Get Started?
Whether you have a fully developed theory or an early-stage concept, TOE-Share gives you a transparent path from idea to published, peer-evaluated framework.