All Models Are the Same Model

A structural view on elementary functions, symbolic AI, and the illusion of diversity

This article reflects ideas from the recent paper
“All elementary functions from a single binary operator” by Andrzej Odrzywołek.

The intuition we rarely question

In applied work, whether in physics, data science, or economic modeling, we operate with a broad toolbox:

  • exponentials
  • logarithms
  • polynomials
  • trigonometric functions

They appear as fundamentally different objects.

Different behaviors.
Different interpretations.
Different domains of application.

This diversity feels natural.

It is also misleading.

The structural collapse

A recent result shows that all elementary functions can be generated from a single operator:

$$
\mathrm{eml}(x, y) = \exp(x) - \ln(y)
$$

Together with a single constant, this is sufficient to construct:

• addition
• multiplication
• powers
• trigonometric functions
• inverse functions

In other words:

The entire space of elementary mathematics can be expressed as recursive compositions of one operator.

This is not a computational trick.

It is a structural statement.
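A minimal numerical sketch of the operator makes this tangible. The identities below are simple illustrations of my own, not the constructions used in the paper:

```python
import math

def eml(x, y):
    """The single binary operator: eml(x, y) = exp(x) - ln(y)."""
    return math.exp(x) - math.log(y)

# exp is recovered directly: ln(1) = 0, so eml(x, 1) = exp(x).
print(eml(2.0, 1.0))        # equals math.exp(2.0)

# ln is recovered in a limit: exp(x) -> 0 as x -> -inf,
# so eml(x, y) -> -ln(y); x = -50 is already "close enough".
print(-eml(-50.0, 5.0))     # approximately math.log(5.0)
```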

From functions to trees

The implication is immediate:

Every function becomes a tree.

A function is no longer:

• an equation
• a formula
• a symbolic expression

It is:

A recursive composition of identical nodes.

Formally:

$$
S \to 1 \mid \mathrm{eml}(S, S)
$$
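This grammar maps directly onto a recursive data structure. A sketch (the class and method names are my own, not the paper's):

```python
import math

def eml(x, y):
    return math.exp(x) - math.log(y)

class Node:
    """A node in an eml expression tree, per S -> 1 | eml(S, S):
    either the leaf constant 1, or an internal node with two subtrees."""
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

    def eval(self):
        if self.left is None:   # leaf: the single constant
            return 1.0
        return eml(self.left.eval(), self.right.eval())

# The smallest non-trivial tree, eml(1, 1) = exp(1) - ln(1) = e:
one = Node()
tree = Node(one, one)
print(tree.eval())   # math.e
```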
This places elementary mathematics into the same category as:

• Boolean logic with NAND
• lambda calculus
• recursive grammars
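The NAND comparison can be made concrete: every Boolean connective is a composition of one gate, just as every elementary function is meant to be a composition of eml. A standard construction:

```python
def nand(a, b):
    return not (a and b)

# All other connectives as compositions of nand alone:
def not_(a):     return nand(a, a)
def and_(a, b):  return nand(nand(a, b), nand(a, b))   # NOT (a NAND b)
def or_(a, b):   return nand(nand(a, a), nand(b, b))   # De Morgan

bits = [False, True]
assert all(not_(a) == (not a) for a in bits)
assert all(and_(a, b) == (a and b) for a in bits for b in bits)
assert all(or_(a, b) == (a or b) for a in bits for b in bits)
```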

The diversity of functions is reduced to:

Differences in tree structure.

What this actually means

This result does not introduce new mathematics.

It removes structure we assumed to be fundamental.

The distinction between:

• exponential growth
• oscillation
• polynomial scaling

is not ontological.

It is representational.

These are not different “types” of behavior.

They are:

Different coordinate projections of the same generative process.

The consequence for modeling

This has direct implications for how models should be understood.

In practice, we often ask:

Which functional form fits the data?

Should we use exponential, polynomial, or logistic models?

This question is ill-posed.

Because:

There is no fundamental difference between these classes.

The real question is:

Which structure of composition captures the system?

Symbolic regression changes character

Most approaches to symbolic regression treat the problem as:

• a discrete search over expressions
• a combinatorial optimization problem

The eml formulation changes this.

It defines:

A continuous, differentiable space of all elementary functions.

This enables:

• gradient-based optimization
• unified model spaces
• direct mapping from data to closed-form expressions

Not approximately.
Not numerically.
But structurally.
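A toy version of the gradient-based idea, assuming the simplest possible tree shape — a single eml node with one learnable scale parameter, far simpler than anything in the paper:

```python
import math

def eml(x, y):
    return math.exp(x) - math.log(y)

# Model: f(x; theta) = eml(theta * x, 1) = exp(theta * x).
# Because eml is smooth, the squared-error loss is differentiable
# in theta and plain gradient descent applies.
xs = [i / 10 for i in range(11)]
ys = [math.exp(0.7 * x) for x in xs]          # data generated with theta = 0.7

theta, lr = 0.2, 0.05
for _ in range(500):
    # d/dtheta of mean (exp(theta*x) - y)^2
    grad = sum(2 * (eml(theta * x, 1.0) - y) * x * math.exp(theta * x)
               for x, y in zip(xs, ys)) / len(xs)
    theta -= lr * grad

print(theta)   # converges toward 0.7
```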

Why this matters for AI

Modern machine learning operates in a paradox:

• Neural networks are expressive but opaque
• Symbolic models are interpretable but brittle

This approach collapses that distinction.

It provides:

A continuous symbolic space that is both expressive and interpretable.

This is not a minor technical improvement.

It is a shift in representation.

The deeper implication

The important point is not the operator.

It is what disappears:

• the distinction between function classes
• the need for handcrafted functional assumptions
• the separation between symbolic and numerical modeling

What remains is:

A unified generative structure.

From a systems perspective, this suggests:

• Models are not collections of functions
• Models are instantiations of a single generative mechanism

A note on limitations

This representation is minimal, not efficient.

Like NAND in logic:

• it is universal
• but not practical as a direct implementation

Tree depth grows quickly.
Numerical stability becomes an issue.
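The blow-up is easy to see: iterating even the simplest self-composition overflows a 64-bit float within a handful of nesting levels (a toy illustration, not a statement about the paper's encodings):

```python
import math

def eml(x, y):
    return math.exp(x) - math.log(y)

x = 1.0
for depth in range(1, 10):
    try:
        x = eml(x, x)          # one extra level of nesting
        print(depth, x)
    except OverflowError:      # math.exp overflows float64 near x ~ 710
        print(depth, "overflow")
        break
```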

This is not a replacement for standard modeling.

It is a reframing.

Conclusion

The result is simple to state:

All elementary functions share the same generative structure.

What changes is not what we can compute.

What changes is how we understand models.

Not as a collection of tools.

But as:

Different structural realizations of a single underlying construction process.