What definitions were crucial to further understanding?


























Often the most difficult part of venturing into a field as a researcher is to come up with an appropriate definition. Sometimes definitions suggest themselves very naturally, as when you solve a problem and then ask, ‘What if I generalize this a bit?’



Other times they arise only after struggling with a subject and realizing you were looking at it from the wrong angle. An appropriate definition can then make all the difference, by reorganizing one's thinking and shedding light on the problems, somehow making them sharper and more focused.



I would like to collect evidence and instances of this idea. An answer should be a story of how someone came up with a good definition and how it was crucial to their understanding of a topic. If you write about someone else, then ideally provide a reference.



(My interest in this is mainly psychological, namely, how the act of naming something somehow brings it into existence and organizes the world around it.)



Edit:



It was suggested that this question is a duplicate of (Examples of advance via good definitions), which asks about good definitions. It was pointed out there, and also here, that basically all notions that have stood the test of time qualify as good definitions.



I was not asking, although many people seem to have construed it this way, for a collection of good definitions, but for a collection of stories showing how a proper definition actually changed the perception of a field.



A typical such story would have someone saying "Wait a moment! I should not be dealing with [concept A] at all! That's the wrong way to approach this. Instead I should define this other guy, [concept B], and then everything will make a lot more sense!"



I realize this is hard to make precise, so I understand if the question gets closed.





























  • 12
    There is the famous example of Grothendieck’s definition of a scheme.
    – Sam Hopkins, Dec 3 '18 at 14:30

  • 4
    The definition of a site (Grothendieck realising that one only needs the notion of a covering to define cohomology) and sheaves on them leading to étale cohomology (and other cohomology theories).
    – TKe, Dec 3 '18 at 15:23

  • 5
    I don't really know the history, but it seems to me the (re)definition of "continuous function" as "pullback of open is open" must have been what allowed for the definition of a topology.
    – benblumsmith, Dec 3 '18 at 16:03

  • 7
    Seems like it would be harder to find examples of well-known definitions that weren't crucial to further understanding.
    – Nik Weaver, Dec 3 '18 at 16:25

  • 4
    @Marcel: sure, that's why I qualified it with "well-known". Was the definition of an abstract group crucial to further understanding? The definition of a manifold? Banach space, topological space, probability measure? Well, of course.
    – Nik Weaver, Dec 3 '18 at 17:15
















soft-question ho.history-overview big-list definitions






edited Dec 5 '18 at 7:33


























community wiki





4 revs, 3 users 88%
Marcel









11 Answers





































A famous definition that led to a completely new point of view is that of a Schwartz distribution. It changed the understanding of what a "function" is, even among engineers.



Actually, Dirichlet's definition of a function in the 19th century also clarified many things.






























  • @Michael: I mean exactly what I wrote. What is unclear?
    – Alexandre Eremenko, Dec 7 '18 at 19:50
































I am surprised no one has mentioned Weierstrass's $\epsilon,\delta$ definition of limits. It enabled mathematicians to reason rigorously about convergence and to eliminate numerous apparent inconsistencies.
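For reference, the definition in question can be stated in modern notation as:

```latex
% Weierstrass's epsilon-delta definition of a limit.
\[
  \lim_{x \to a} f(x) = L
  \quad\Longleftrightarrow\quad
  \forall \epsilon > 0 \;\; \exists \delta > 0 \;\;
  \forall x :\; 0 < |x - a| < \delta \implies |f(x) - L| < \epsilon .
\]
```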



















































    A good example comes from the definition of manifolds, although it's less of a single definition and more of an evolving notion.



    The background to modern differential geometry lies in trying to understand cases where the parallel postulate fails. However, trying to pin down rigorously what sort of spaces to study is tricky. It's only tangentially related, but there's a famous paper by Lakatos called "Proofs and Refutations" about the Euler characteristic of surfaces, which gives insight into why defining spaces precisely is a non-trivial problem.



    Riemann had an intuitive notion of manifolds, and it's possible to see how his ideas evolved into the modern notion. It seems that Riemann's "Mannigfaltigkeit" had an element of "I know it when I see it." Similarly, Poincaré didn't have a modern definition of a manifold in Analysis Situs. The modern definition of a smooth manifold as a locally Euclidean space wasn't introduced until 1912, by Hermann Weyl. Even then, it wasn't clear until Whitney proved the embedding theorem that the intrinsic definition was equivalent to the definition via submanifolds of Euclidean space.



    That's still not the end of the story. If one tries to really pin down the definition, one realizes that there are at least three distinct categories of manifolds: smooth, piecewise-linear, and topological. The fact that these categories are not equivalent gives rise to a lot of research that continues today.


























    • 3
      "tangentially related"
      – JP McCarthy, Dec 6 '18 at 13:03
































    In set theory, definitely the notion of a Woodin cardinal.



    First, it is not an entirely straightforward notion to guess. Significant large cardinals were up to that point defined as critical points of certain elementary embeddings. This is not the case here: Woodin cardinals need not be measurable. If $\kappa$ is Woodin, then $V_\kappa$ is a model of set theory where there is a proper class of strong cardinals. Woodinness requires more, namely, that these strong embeddings move some predicates correctly.



    Second, the definition turned out to identify a pivotal point in inner model theory: for notions weaker than Woodinness, the corresponding canonical inner models carry $\mathbf{\Delta}^1_3$ well-orderings of the reals. This is no longer true once we have Woodin cardinals. This is closely tied up with the complexity of the comparison process: given two set models that look like initial segments of canonical inner models, how hard is it to compare them to tell which one carries more information? For notions weaker than Woodinness, this process is carried out in an essentially linear fashion: disagreements between the models are witnessed by some measures, and repeatedly using these least measures to form ultrapowers eventually lines the models up. One of the iterates ends up as an initial segment of the other, and whichever one is longer comes from the model that originally had more information.

    With Woodin cardinals and beyond, this process is no longer enough. Instead, comparisons sometimes need to retrace steps, and rather than a linear iteration, at the end we have tree-like structures. Identifying these crucial differences allowed us to develop inner model theory beyond this point. This in turn has led to many results and to the identification of deep connections between large cardinals and descriptive set theory. Literally, thanks to the presence of Woodin cardinals, the set-theoretic landscape grew and transformed significantly.



    Third, Woodinness also turns out to be the notion needed to carry out certain forcing constructions. Some consistency results that were not expected now could be established, thanks to the identification of genuinely new forcing notions that use the Woodin cardinals in an essential way. Other constructions that were known from significant large cardinals were improved to their optimal form.



    The story of how the notion of Woodinness was identified is actually quite nice. Kunen had used huge cardinals to prove the consistency of the existence of saturated ideals on $omega_1$. The embeddings witnessing hugeness end up lifting to the generic embeddings witnessing saturation in the appropriate forcing extension. In 1984, Foreman, Magidor and Shelah identified an entirely new way of finding models with saturated ideals. Their construction improved the large cardinal notion needed from hugeness to supercompactness. What is significant is that the generic embedding is no longer a lifting of an old genuine embedding. Indeed, their construction preserves $omega_1$. As a consequence of their results, in May of the same year, Woodin showed that supercompactness implies that all projective sets of reals are Lebesgue measurable (and more). Conversations between Shelah and Woodin quickly led to the realization that much weaker cardinals than supercompact sufficed for this result. Through a series of refinements, the notions of what we now call Shelah and Woodin cardinals were identified, with the latter being of precisely the right strength: all this happened while developments in determinacy and inner model theory showed the deep connections between these fields, and the pivotal role that Woodin cardinals played in this connection. This all happened quite quickly: by the time the Shelah-Woodin paper appeared in print, the importance of Woodin cardinals was already recognized.




    MR1074499 (92m:03087). Shelah, Saharon; Woodin, Hugh. Large cardinals imply that every reasonably definable set of reals is Lebesgue measurable. Israel J. Math. 70 (1990), no. 3, 381–394.































    • Those were some exciting times in set theory!
      – Asaf Karagila, Dec 4 '18 at 15:37
































    I would nominate the definition of probability theory in terms of measure theory, where a probability space is an abstract measure space, an event is a measurable subset, a random variable is a measurable function, and expectation is the Lebesgue integral. This is usually credited to a 1933 paper by Kolmogorov, though it seems that many of the ideas had previously been around. Here is a very interesting survey by Shafer and Vovk, entitled "The origins and legacy of Kolmogorov's Grundbegriffe."



    People had been thinking about probability for a long time before that, but these definitions made it possible to place everything on a rigorous footing and exploit many key results from measure theory. Shafer and Vovk say that it took probability from a mathematical "pastime" to become a respected branch of pure mathematics.



    (I am no expert historian so please feel free to edit with corrections, more background, further discussion, etc.)
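    The dictionary in the first paragraph (space, event, random variable, expectation) can be instantiated concretely on a finite sample space. The following toy sketch is purely illustrative (the names `prob` and `expectation` are invented for this example); in the finite case the Lebesgue integral reduces to a weighted sum:

```python
from fractions import Fraction
from itertools import product

# Probability space: two fair coin tosses, with the uniform measure.
omega = list(product("HT", repeat=2))        # sample space
P = {w: Fraction(1, 4) for w in omega}       # probability measure

def prob(event):
    """P(A) for an event A, i.e. a subset of the sample space."""
    return sum(P[w] for w in event)

def expectation(X):
    """E[X], the integral of X dP; a weighted sum in the finite case."""
    return sum(X(w) * P[w] for w in omega)

heads = lambda w: w.count("H")                       # a random variable
at_least_one_head = {w for w in omega if "H" in w}   # an event

print(prob(at_least_one_head))   # 3/4
print(expectation(heads))        # 1
```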






























    • From the words of my probability professor: "there are only three letters in probability: $\Omega$, $\mathcal{A}$, and $\mathbb{P}$".
      – Najib Idrissi, Dec 4 '18 at 17:59

    • @NajibIdrissi: There's also $X$ as in $X:\Omega\to\mathbb{R}$, and $\operatorname{E}$ as in $\operatorname{E}X = \int_\Omega X\,dP$.
      – Michael Hardy, Dec 4 '18 at 23:01

    • 4
      I am convinced that Kolmogorov was not God's last prophet, and his definitions need to be kept in a certain limited context while others advance beyond them.
      – Michael Hardy, Dec 4 '18 at 23:02

    • "At a purely formal level, one could call probability theory the study of measure spaces with total measure one, but that would be like calling number theory the study of strings of digits which terminate." (Terry Tao)
      – Michael, Dec 7 '18 at 16:46

    • @Michael: Following that analogy, Kolmogorov's axiomatization is like the invention of decimal notation. It isn't necessarily the only way to think about numbers, and it's possible to focus too much on the formalism and lose sight of the bigger ideas, but nonetheless it's a powerful tool and a major advance.
      – Nate Eldredge, Dec 7 '18 at 21:03
































    My nominee is the definition of a topology. I don't know the history of the subject, but my impression is that this was the result of a collective effort. The definition via collections of subsets closed under finite intersections and arbitrary unions is not the kind of thing one would get up one morning and decide to consider. It seems to me this was an example of a category where the definition of morphisms was clear (continuous maps) but where finding the right notion of object was not obvious.



    Perhaps a sign of a good definition is that it leads to other good definitions. Those could arise in order to remedy some flaws of the original one, like Grothendieck's notion of a topos. They could also arise as particular cases of the original definition, like Schwartz's definition of the topology on $\mathcal{D}$ underlying the notion of a distribution mentioned in Alexandre's answer.
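    As a toy illustration of the axioms just mentioned (the function name here is invented): for a finite family of subsets it suffices to check pairwise intersections and unions, since closure under pairwise unions gives closure under all finite, and hence here all, unions by induction.

```python
from itertools import combinations

def is_topology(X, tau):
    """Check whether the family tau is a topology on the finite set X."""
    tau = {frozenset(s) for s in tau}
    # The empty set and the whole space must belong to tau.
    if frozenset() not in tau or frozenset(X) not in tau:
        return False
    # Closure under pairwise intersections and unions.
    for a, b in combinations(tau, 2):
        if a & b not in tau or a | b not in tau:
            return False
    return True

X = {1, 2, 3}
print(is_topology(X, [set(), {1}, {1, 2}, {1, 2, 3}]))  # True
print(is_topology(X, [set(), {1}, {2}, {1, 2, 3}]))     # False: {1} | {2} missing
```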



















































      In the comments I offered the "classical" example of Grothendieck's definition of a scheme. But for a more modern example: Fomin and Zelevinsky's definition of a cluster algebra is at first sight a rather ungainly thing, but turned out to be exactly what was needed to unify certain patterns that appeared in various hitherto unrelated areas of mathematics like discrete integrable systems, the representation theory of associative algebras, Poisson/symplectic geometry, etc.




















































        My interest in this is mainly psychological, namely, how the action of naming something somehow brings it into existence and organizes the world around it.




        The power of naming...



        J.-M. Souriau wrote a book meant to take anyone from the crib to relativistic cosmology and quantum statistics — that he unfortunately (predictably?) never got to really finish, but the draft is online. He starts by saying that all babies are (of course) natural born mathematicians, and everyone’s “first and sensual” abstraction is when they name that recurring warm blob “mommy”.



        Surely that qualifies as “crucial to further understanding”; he argues that half math is more of the same, and tells stories illustrating the power of naming numbers, points, the unknown $(x)$, maps, variables, determinants, groups, morphisms, cohomologies... all would qualify as answers. It’s not just about coming up with epoch-making definitions, either: teaching students to define and name variables in their problems is also what we do.



        We’ve all seen quotes to the effect that it’s all in the definitions. “Success stories” in that are largely the history of math, or at least one way to look at it. Specific recent ones that likely inspired the book (and made it to the MSC) would be symplectic manifold (1950), symplectic geometry (1953; earlier it was this), and moment(um) map (1967).



        (I agree with others that this is a near-duplicate, and will transfer my answer there if needed.)


























        • 2
          I've heard Noam Chomsky theorize in the reverse direction: that the need to count inductively helped evolve recursive brain structures that were then repurposed for language.
          – Daniel R. Collins, Dec 4 '18 at 5:41

        • 1
          This is perhaps restating Heraclitus' Logos, a concept considered so important in its day that it was made the opening passage of John's Gospel.
          – user334732, Dec 5 '18 at 5:49

        • 2
          One might also reflect upon the critical role of words in annotating things, which is itself integral to understanding. Once a concept is annotated we can begin to annotate its properties, and then we no longer need to hold the full concept in our mind, rather the salient parts thereof, simplifying the process of thinking about it and making progress possible where the degree of complexity would otherwise be too great.
          – user334732, Dec 5 '18 at 6:02


































        Well, this is basically the same answer as the one by Alexandre Eremenko, but here goes: The particular form of the definition of derivative is crucial for partial differential equations. Using weaker notions than classical derivatives leads to a whole new theory of solutions. While this is (part of) what the answer about distributions is about, the particular case of weak derivatives due to Sobolev predates distributions and is of prime importance to the field of PDEs. Also, weak derivatives allow for a rich theory of function spaces (Banach and Hilbert ones).



















































          I want to add the notion of Tree-Width to the list.



          Roughly speaking, tree-width measures how far a graph is from being a tree. A tree has tree-width 1, while a $k$-clique has tree-width $k-1$. Tree-width has been used as a parameter in the analysis of a number of graph algorithms.



          I think like every other clever definition, the definition of tree-width is something that makes a lot of connections clearer once we internalize it and yet, the definition is not something that is easy to come up with.



          For reference,




          A tree decomposition of a graph $G = (V, E)$ is a tree, $T$, with nodes
          $X_1$, ..., $X_n$, where each $X_i$ is a subset of $V$, satisfying the following
          properties (the term node is used to refer to a vertex of $T$ to avoid
          confusion with vertices of $G$):




          1. The union of all sets $X_i$ equals $V$. That is, each graph vertex is contained in at least one tree node.


          2. If $X_i$ and $X_j$ both contain a vertex $v$, then all nodes $X_k$ of $T$ in the (unique) path between $X_i$ and $X_j$ contain $v$ as well. Equivalently, the
            tree nodes containing vertex $v$ form a connected subtree of $T$.


          3. For every edge $(v, w)$ in the graph, there is a subset $X_i$ that contains both $v$ and $w$. That is, vertices are adjacent in the graph
            only when the corresponding subtrees have a node in common.


          The width of a tree decomposition is the size of its largest set $X_i$
          minus one. The tree-width $\mathrm{tw}(G)$ of a graph $G$ is the minimum width among
          all possible tree decompositions of $G$.
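          The three properties above are mechanical to verify. The sketch below (toy data; the function name is invented for this example) checks them for a decomposition of the 4-cycle, whose tree-width is 2:

```python
def is_tree_decomposition(vertices, edges, bags, tree_edges):
    """Check the three tree-decomposition properties for graph (vertices, edges),
    bags {node: set_of_vertices}, and tree_edges joining the bag nodes."""
    # 1. Every graph vertex appears in at least one bag.
    if set().union(*bags.values()) != set(vertices):
        return False
    # 3. Every graph edge is contained in some bag.
    for v, w in edges:
        if not any({v, w} <= bag for bag in bags.values()):
            return False
    # 2. For each vertex, the bags containing it form a connected subtree:
    #    check by flood-fill restricted to those bag nodes.
    for v in vertices:
        nodes = {i for i, bag in bags.items() if v in bag}
        reached, frontier = {min(nodes)}, [min(nodes)]
        while frontier:
            i = frontier.pop()
            for a, b in tree_edges:
                j = b if a == i else a if b == i else None
                if j in nodes and j not in reached:
                    reached.add(j)
                    frontier.append(j)
        if reached != nodes:
            return False
    return True

# C4: vertices 1-2-3-4-1; two bags {1,2,3}, {1,3,4} joined by one tree edge.
V, E = [1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)]
bags = {0: {1, 2, 3}, 1: {1, 3, 4}}
print(is_tree_decomposition(V, E, bags, [(0, 1)]))  # True; width = 3 - 1 = 2
```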




















































            Euclid did not have real numbers. He could say that the length of one line segment goes into the length of another a certain number of times, or that the ratio of the lengths of two segments was the same as that of two specified numbers ("numbers" were $2,3,4,5,\ldots$, and in particular $1$ was not a "number"), or that the ratio of the length of the diagonal of a square to its side was not the ratio of any two numbers.



            There is a much-celebrated essay by James Wilkinson titled The Perfidious Polynomial that I have not brought myself to read beyond the first couple of pages, because of the profound stupidity of what Wilkinson claims is the history of the development of number systems. He states with a straight face that negative numbers were introduced for the purpose of solving polynomial equations that had no positive solutions, that rational numbers were introduced after that for the purpose of solving yet more equations, that irrational numbers were then introduced for solving yet more, and finally complex numbers for the purpose of solving quadratic equations that lacked real solutions.

            Did Wilkinson never hear that Euclid proved the statement above about the ratio of diagonal to side of a square, yet had never heard of negative numbers, and did not even have anything that we would consider a system of numbers that includes positive rational and irrational numbers? That imaginary numbers were used to find real solutions of third-degree polynomials? That negative numbers were introduced by accountants for their purposes? And that all of that obviously makes more sense and is more elegant and intellectually satisfying than the childish story he tells? In order to do what Wilkinson says was done, one would need definitions that were introduced only much later.






            • I quite like an (deliberately) ahistoric treatment of these number sets... – JP McCarthy, Dec 6 '18 at 13:07






            • I don't think that a rant about an essay whose first two pages you don't like qualifies as an answer to the question. – Gerry Myerson, Dec 6 '18 at 21:10











            11 Answers
            A famous definition which led to a completely new point of view is the definition of Schwartz distribution. It changed the understanding of what a "function" is, even among engineers.



            Actually, the definition of function by Dirichlet in 19th century also clarified many things.
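For readers who have not seen it, the definition in question (sketched here from memory, not quoted from Schwartz) takes a distribution on $\mathbb{R}$ to be a continuous linear functional on smooth compactly supported test functions, with differentiation defined by moving the derivative onto the test function:

```latex
\[
  T : C_c^\infty(\mathbb{R}) \to \mathbb{R}, \qquad
  \langle \delta, \varphi \rangle = \varphi(0), \qquad
  \langle T', \varphi \rangle = -\langle T, \varphi' \rangle .
\]
```

Every locally integrable $f$ defines a distribution $\langle T_f, \varphi \rangle = \int f\varphi$, so the pairing extends classical functions while giving objects like $\delta$, which is not a function at all, a perfectly good derivative.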






            edited Dec 3 '18 at 18:00 — community wiki, 2 revs, Alexandre Eremenko

            • @Michael: I mean exactly what I wrote. What is unclear? – Alexandre Eremenko, Dec 7 '18 at 19:50











              I am surprised no-one has mentioned Weierstrass's $\epsilon,\delta$ definition of limits. This enabled mathematicians to rigorously reason about convergence and eliminate numerous apparent inconsistencies.
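As a concrete illustration (my addition, not part of the original answer), the definition reduces a limit claim to an explicit inequality game. To verify $\lim_{x\to 2} x^2 = 4$:

```latex
% Claim: for every \epsilon > 0 there is \delta > 0 such that
% 0 < |x - 2| < \delta implies |x^2 - 4| < \epsilon.
Take $\delta = \min(1, \epsilon/5)$. If $0 < |x - 2| < \delta$, then
$|x + 2| \le |x - 2| + 4 < 5$, hence
\[
  |x^2 - 4| = |x - 2|\,|x + 2| < 5\delta \le \epsilon .
\]
```

The point is that $\delta$ is produced explicitly from $\epsilon$, with no appeal to infinitesimals.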






                answered Dec 4 '18 at 19:52 — community wiki, Aryeh Kontorovich
                    A good example comes from the definition of manifolds, although it's less of a single definition and more of an evolving notion.



                    The background to modern differential geometry lies in trying to understand cases where the parallel postulate fails. However, rigorously pinning down what sort of spaces to study is pretty tricky. It's only tangentially related, but there's a famous paper by Lakatos called "Proofs and Refutations", about the Euler characteristic of surfaces, which gives insight into why defining spaces precisely is a non-trivial problem.



                    Riemann had an intuitive notion of manifolds, and it's possible to see how his ideas evolved into the modern notion. It seems that Riemann's "Mannigfaltigkeit" had an element of "I know it when I see it." Similarly, Poincaré didn't have a modern definition of manifold in Analysis Situs. The modern definition of a smooth manifold as a locally Euclidean space wasn't introduced until 1912, by Hermann Weyl.
                    Even then, it wasn't clear until Whitney proved his embedding theorem that the intrinsic definition was equivalent to submanifolds of Euclidean space.



                    That's still not the end of the story. If one tries to really pin down the definition, one realizes that there are at least three distinct categories of manifolds: smooth, piecewise-linear, and topological. The fact that these categories are not equivalent gives rise to a lot of research that is still continuing today.






                    answered Dec 3 '18 at 17:45 — community wiki, Gabe K

                    • "tangentially related" – JP McCarthy, Dec 6 '18 at 13:03
                    In set theory, definitely the notion of a Woodin cardinal.



                    First, it is not an entirely straightforward notion to guess. Significant large cardinals were up to that point defined as critical points of certain elementary embeddings. This is not the case here: Woodin cardinals need not be measurable. If $\kappa$ is Woodin, then $V_\kappa$ is a model of set theory in which there is a proper class of strong cardinals. Woodinness requires more, namely, that these strong embeddings move some predicates correctly.



                    Second, the definition turned out to identify a pivotal point in inner model theory: for notions weaker than Woodinness, the corresponding canonical inner models carry $\mathbf{\Delta}^1_3$ well-orderings of the reals. This is no longer true once we have Woodin cardinals. This is closely tied to the complexity of the comparison process: given two set models that look like initial segments of canonical inner models, how hard is it to compare them to tell which one carries more information? For notions weaker than Woodinness, this process is carried out in an essentially linear fashion: disagreements between the models are witnessed by some measures, and repeatedly using these least measures to form ultrapowers eventually lines the models up; one of the iterates ends up as an initial segment of the other, and whichever one is longer comes from the model that originally had more information.

                    With Woodin cardinals and beyond, this process is no longer enough. Instead, comparisons sometimes need to retrace steps, and rather than a linear iteration, at the end we have tree-like structures. Identifying these crucial differences allowed us to develop inner model theory beyond this point. This in turn has led to many results and to the identification of deep connections between large cardinals and descriptive set theory. Literally, thanks to the presence of Woodin cardinals, the set-theoretic landscape grew and transformed significantly.



                    Third, Woodinness also turns out to be the notion needed to carry out certain forcing constructions. Some consistency results that were not expected now could be established, thanks to the identification of genuinely new forcing notions that use the Woodin cardinals in an essential way. Other constructions that were known from significant large cardinals were improved to their optimal form.



                    The story of how the notion of Woodinness was identified is actually quite nice. Kunen had used huge cardinals to prove the consistency of the existence of saturated ideals on $\omega_1$. The embeddings witnessing hugeness end up lifting to the generic embeddings witnessing saturation in the appropriate forcing extension. In 1984, Foreman, Magidor and Shelah identified an entirely new way of finding models with saturated ideals. Their construction improved the large cardinal notion needed from hugeness to supercompactness. What is significant is that the generic embedding is no longer a lifting of an old genuine embedding. Indeed, their construction preserves $\omega_1$. As a consequence of their results, in May of the same year, Woodin showed that supercompactness implies that all projective sets of reals are Lebesgue measurable (and more). Conversations between Shelah and Woodin quickly led to the realization that much weaker cardinals than supercompact sufficed for this result. Through a series of refinements, the notions of what we now call Shelah and Woodin cardinals were identified, with the latter being of precisely the right strength. All this happened while developments in determinacy and inner model theory showed the deep connections between these fields, and the pivotal role that Woodin cardinals played in this connection. It all happened quite quickly: by the time the Shelah–Woodin paper appeared in print, the importance of Woodin cardinals was already recognized.




                    MR1074499 (92m:03087). Shelah, Saharon; Woodin, Hugh. Large cardinals imply that every reasonably definable set of reals is Lebesgue measurable. Israel J. Math. 70 (1990), no. 3, 381–394.







                    share|cite|improve this answer











                    $endgroup$













                    • $begingroup$
                      Those were some exciting times in set theory!
                      $endgroup$
                      – Asaf Karagila
                      Dec 4 '18 at 15:37
















                    23












                    $begingroup$

                    In set theory, definitely the notion of a Woodin cardinal.



                    First, it is not an entirely straightforward notion to guess. Significant large cardinals were up to that point defined as critical points of certain elementary embeddings. This is not the case here: Woodin cardinals need not be measurable. If $kappa$ is Woodin, then $V_kappa$ is a model of set theory where there is a proper class of strong cardinals. Woodinness requires more, namely, that these strong embeddings move some predicates correctly.



                    Second, the definition turned out to identify a pivotal point in inner model theory: for notions weaker than Woodinness, the corresponding canonical inner models carry $mathbfDelta^1_3$ well-orderings of the reals. This is no longer true once we have Woodin cardinals. This is closely tied up to the complexity of the comparison process: given two set models that look like initial segments of canonical inner models, how hard is to compare them to tell which one carries more information? For notions weaker than Woodinness, this process is carried out in an essentially linear fashion: disagreements between the models are witnessed by some measures, and repeatedly using these least measures to form ultrapowers eventually lines the models up: One of the iterates ends up as an initial segment of the other, and whichever one is longer comes from the model which originally has more information.With Woodin cardinals and beyond this process is no longer enough. Instead, comparisons sometimes need to retrace steps, and rather than a linear iteration, at the end we have tree-like structures. Identifying these crucial differences allowed us to develop inner model theory beyond this point. This in turn has led to many results and to the identification of deep connections between large cardinals and descriptive set theory. Literally, thanks to the presence of Woodin cardinals, the set-theoretic landscape grew and transformed significantly.



                    Third, Woodinness also turns out to be the notion needed to carry out certain forcing constructions. Some consistency results that were not expected now could be established, thanks to the identification of genuinely new forcing notions that use the Woodin cardinals in an essential way. Other constructions that were known from significant large cardinals were improved to their optimal form.



                    The story of how the notion of Woodinness was identified is actually quite nice. Kunen had used huge cardinals to prove the consistency of the existence of saturated ideals on $omega_1$. The embeddings witnessing hugeness end up lifting to the generic embeddings witnessing saturation in the appropriate forcing extension. In 1984, Foreman, Magidor and Shelah identified an entirely new way of finding models with saturated ideals. Their construction improved the large cardinal notion needed from hugeness to supercompactness. What is significant is that the generic embedding is no longer a lifting of an old genuine embedding. Indeed, their construction preserves $omega_1$. As a consequence of their results, in May of the same year, Woodin showed that supercompactness implies that all projective sets of reals are Lebesgue measurable (and more). Conversations between Shelah and Woodin quickly led to the realization that much weaker cardinals than supercompact sufficed for this result. Through a series of refinements, the notions of what we now call Shelah and Woodin cardinals were identified, with the latter being of precisely the right strength: all this happened while developments in determinacy and inner model theory showed the deep connections between these fields, and the pivotal role that Woodin cardinals played in this connection. This all happened quite quickly: by the time the Shelah-Woodin paper appeared in print, the importance of Woodin cardinals was already recognized.




                    MR1074499 (92m:03087). Shelah, Saharon; Woodin, Hugh. Large cardinals imply that every reasonably definable set of reals is Lebesgue measurable. Israel J. Math. 70 (1990), no. 3, 381–394.







                    share|cite|improve this answer











                    $endgroup$













                    • $begingroup$
                      Those were some exciting times in set theory!
                      $endgroup$
                      – Asaf Karagila
                      Dec 4 '18 at 15:37














                    23












                    23








                    23





                    $begingroup$

                    In set theory, definitely the notion of a Woodin cardinal.



                    First, it is not an entirely straightforward notion to guess. Significant large cardinals were up to that point defined as critical points of certain elementary embeddings. This is not the case here: Woodin cardinals need not be measurable. If $kappa$ is Woodin, then $V_kappa$ is a model of set theory where there is a proper class of strong cardinals. Woodinness requires more, namely, that these strong embeddings move some predicates correctly.



                    Second, the definition turned out to identify a pivotal point in inner model theory: for notions weaker than Woodinness, the corresponding canonical inner models carry $mathbfDelta^1_3$ well-orderings of the reals. This is no longer true once we have Woodin cardinals. This is closely tied up to the complexity of the comparison process: given two set models that look like initial segments of canonical inner models, how hard is to compare them to tell which one carries more information? For notions weaker than Woodinness, this process is carried out in an essentially linear fashion: disagreements between the models are witnessed by some measures, and repeatedly using these least measures to form ultrapowers eventually lines the models up: One of the iterates ends up as an initial segment of the other, and whichever one is longer comes from the model which originally has more information.With Woodin cardinals and beyond this process is no longer enough. Instead, comparisons sometimes need to retrace steps, and rather than a linear iteration, at the end we have tree-like structures. Identifying these crucial differences allowed us to develop inner model theory beyond this point. This in turn has led to many results and to the identification of deep connections between large cardinals and descriptive set theory. Literally, thanks to the presence of Woodin cardinals, the set-theoretic landscape grew and transformed significantly.



                    Third, Woodinness also turns out to be the notion needed to carry out certain forcing constructions. Some consistency results that were not expected now could be established, thanks to the identification of genuinely new forcing notions that use the Woodin cardinals in an essential way. Other constructions that were known from significant large cardinals were improved to their optimal form.



                    The story of how the notion of Woodinness was identified is actually quite nice. Kunen had used huge cardinals to prove the consistency of the existence of saturated ideals on $omega_1$. The embeddings witnessing hugeness end up lifting to the generic embeddings witnessing saturation in the appropriate forcing extension. In 1984, Foreman, Magidor and Shelah identified an entirely new way of finding models with saturated ideals. Their construction improved the large cardinal notion needed from hugeness to supercompactness. What is significant is that the generic embedding is no longer a lifting of an old genuine embedding. Indeed, their construction preserves $omega_1$. As a consequence of their results, in May of the same year, Woodin showed that supercompactness implies that all projective sets of reals are Lebesgue measurable (and more). Conversations between Shelah and Woodin quickly led to the realization that much weaker cardinals than supercompact sufficed for this result. Through a series of refinements, the notions of what we now call Shelah and Woodin cardinals were identified, with the latter being of precisely the right strength: all this happened while developments in determinacy and inner model theory showed the deep connections between these fields, and the pivotal role that Woodin cardinals played in this connection. This all happened quite quickly: by the time the Shelah-Woodin paper appeared in print, the importance of Woodin cardinals was already recognized.




                    MR1074499 (92m:03087). Shelah, Saharon; Woodin, Hugh. Large cardinals imply that every reasonably definable set of reals is Lebesgue measurable. Israel J. Math. 70 (1990), no. 3, 381–394.







                    share|cite|improve this answer











                    $endgroup$



                    In set theory, definitely the notion of a Woodin cardinal.



                    First, it is not an entirely straightforward notion to guess. Significant large cardinals were up to that point defined as critical points of certain elementary embeddings. This is not the case here: Woodin cardinals need not be measurable. If $\kappa$ is Woodin, then $V_\kappa$ is a model of set theory where there is a proper class of strong cardinals. Woodinness requires more, namely, that these strong embeddings move some predicates correctly.
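                    For the record, here is one standard modern formulation of the notion (a sketch from memory, not a quotation from the Shelah–Woodin paper): $\delta$ is Woodin if and only if for every $A \subseteq V_\delta$ there is an $\alpha < \delta$ that is $<\delta$-strong for $A$, that is:

```latex
% Sketch of the usual definition: \delta is Woodin iff for every
% A \subseteq V_\delta there is an \alpha < \delta that is
% "<\delta-strong for A":
\forall A \subseteq V_\delta \;\exists \alpha < \delta \;
\forall \beta < \delta \;\exists j \colon V \to M \text{ elementary with}
\quad
\operatorname{crit}(j) = \alpha, \quad j(\alpha) > \beta, \quad
V_\beta \subseteq M, \quad j(A) \cap V_\beta = A \cap V_\beta .
```

The "moving predicates correctly" in the paragraph above is the clause $j(A) \cap V_\beta = A \cap V_\beta$.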



                    Second, the definition turned out to identify a pivotal point in inner model theory: for notions weaker than Woodinness, the corresponding canonical inner models carry $\mathbf{\Delta}^1_3$ well-orderings of the reals. This is no longer true once we have Woodin cardinals. This is closely tied to the complexity of the comparison process: given two set models that look like initial segments of canonical inner models, how hard is it to compare them to tell which one carries more information?

                    For notions weaker than Woodinness, this process is carried out in an essentially linear fashion: disagreements between the models are witnessed by some measures, and repeatedly using these least measures to form ultrapowers eventually lines the models up: one of the iterates ends up as an initial segment of the other, and whichever one is longer comes from the model that originally had more information. With Woodin cardinals and beyond, this process is no longer enough. Instead, comparisons sometimes need to retrace their steps, and rather than a linear iteration, at the end we have tree-like structures. Identifying these crucial differences allowed us to develop inner model theory beyond this point. This in turn has led to many results and to the identification of deep connections between large cardinals and descriptive set theory. Literally, thanks to the presence of Woodin cardinals, the set-theoretic landscape grew and transformed significantly.



                    Third, Woodinness also turns out to be the notion needed to carry out certain forcing constructions. Some consistency results that were not expected now could be established, thanks to the identification of genuinely new forcing notions that use the Woodin cardinals in an essential way. Other constructions that were known from significant large cardinals were improved to their optimal form.



                    The story of how the notion of Woodinness was identified is actually quite nice. Kunen had used huge cardinals to prove the consistency of the existence of saturated ideals on $\omega_1$. The embeddings witnessing hugeness end up lifting to the generic embeddings witnessing saturation in the appropriate forcing extension. In 1984, Foreman, Magidor and Shelah identified an entirely new way of finding models with saturated ideals. Their construction improved the large cardinal notion needed from hugeness to supercompactness. What is significant is that the generic embedding is no longer a lifting of an old genuine embedding. Indeed, their construction preserves $\omega_1$. As a consequence of their results, in May of the same year, Woodin showed that supercompactness implies that all projective sets of reals are Lebesgue measurable (and more). Conversations between Shelah and Woodin quickly led to the realization that much weaker cardinals than supercompact sufficed for this result. Through a series of refinements, the notions of what we now call Shelah and Woodin cardinals were identified, with the latter being of precisely the right strength: all this happened while developments in determinacy and inner model theory showed the deep connections between these fields, and the pivotal role that Woodin cardinals played in this connection. This all happened quite quickly: by the time the Shelah–Woodin paper appeared in print, the importance of Woodin cardinals was already recognized.




                    MR1074499 (92m:03087). Shelah, Saharon; Woodin, Hugh. Large cardinals imply that every reasonably definable set of reals is Lebesgue measurable. Israel J. Math. 70 (1990), no. 3, 381–394.








                    share|cite|improve this answer

                    edited Dec 3 '18 at 16:21

                    community wiki
                    2 revs
                    Andrés E. Caicedo
                    • $begingroup$
                      Those were some exciting times in set theory!
                      $endgroup$
                      – Asaf Karagila
                      Dec 4 '18 at 15:37





























                    19












                    $begingroup$

                    I would nominate the definition of probability theory in terms of measure theory, where a probability space is an abstract measure space, an event is a measurable subset, a random variable is a measurable function, and expectation is the Lebesgue integral. This is usually credited to a 1933 paper by Kolmogorov, though it seems that many of the ideas had previously been around. Here is a very interesting survey by Shafer and Vovk, entitled "The origins and legacy of Kolmogorov's Grundbegriffe."



                    People had been thinking about probability for a long time before that, but these definitions made it possible to place everything on a rigorous footing and exploit many key results from measure theory. Shafer and Vovk say that it took probability from a mathematical "pastime" to become a respected branch of pure mathematics.



                    (I am no expert historian so please feel free to edit with corrections, more background, further discussion, etc.)
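                    In symbols, the dictionary reads roughly as follows (my paraphrase of the now-standard setup, not a quotation from the Grundbegriffe):

```latex
% A probability space is a measure space of total mass one:
(\Omega, \mathcal{A}, \mathbb{P}), \qquad \mathbb{P}(\Omega) = 1,
\qquad
\mathbb{P}\Bigl(\,\bigcup_{n=1}^{\infty} A_n\Bigr)
  = \sum_{n=1}^{\infty} \mathbb{P}(A_n)
\quad \text{for pairwise disjoint } A_n \in \mathcal{A}.
% An event is a set A \in \mathcal{A}, a random variable is a
% measurable function X : \Omega \to \mathbb{R}, and expectation
% is the Lebesgue integral:
\operatorname{E}[X] = \int_\Omega X \, d\mathbb{P}.
```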






                    share|cite|improve this answer











                    $endgroup$













                    • $begingroup$
                      In the words of my probability professor: "there are only three letters in probability, $\Omega$, $\mathcal{A}$, and $\mathbb{P}$".
                      $endgroup$
                      – Najib Idrissi
                      Dec 4 '18 at 17:59










                    • $begingroup$
                      @NajibIdrissi : There's also $X$ as in $X:\Omega\to\mathbb R$, and $\operatorname E$ as in $\operatorname E X = \int_\Omega X\, dP$.
                      $endgroup$
                      – Michael Hardy
                      Dec 4 '18 at 23:01






                    • 4




                      $begingroup$
                      I am convinced that Kolmogorov was not God's last prophet, and his definitions need to be kept in a certain limited context while others advance beyond them.
                      $endgroup$
                      – Michael Hardy
                      Dec 4 '18 at 23:02










                    • $begingroup$
                      "At a purely formal level, one could call probability theory the study of measure spaces with total measure one, but that would be like calling number theory the study of strings of digits which terminate." (Terry Tao)
                      $endgroup$
                      – Michael
                      Dec 7 '18 at 16:46










                    • $begingroup$
                      @Michael: Following that analogy, Kolmogorov's axiomatization is like the invention of decimal notation. It isn't necessarily the only way to think about numbers, and it's possible to focus too much on the formalism and lose sight of the bigger ideas, but nonetheless it's a powerful tool and a major advance.
                      $endgroup$
                      – Nate Eldredge
                      Dec 7 '18 at 21:03
















                    answered Dec 4 '18 at 3:39

                    community wiki
                    Nate Eldredge
                    19












                    $begingroup$

                    My nominee is the definition of a topology. I don't know the history of the subject but my impression is that this was the result of a collective effort. The definition with collections of subsets closed under finite intersections and arbitrary unions is not the kind of thing one would get up one morning and decide to consider. It seems to me this was an example of a category where the definition of morphisms was clear (continuous maps) but where finding the right notion of object was not obvious.



                    Perhaps a sign of a good definition is that it leads to other good definitions. Those could arise in order to remedy some flaws of the original one, like Grothendieck's topos. They could also arise as particular cases of the original definition like Schwartz's definition of the topology on $\mathcal{D}$ underlying the notion of distribution mentioned in Alexandre's answer.
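                    To make the axioms concrete: on a finite set they can be checked by brute force, since closure under pairwise unions and intersections already gives closure under arbitrary unions and finite intersections. A small toy script (my own illustration, not part of the history):

```python
from itertools import chain, combinations

def is_topology(X, tau):
    """Check the axioms of a topology on a finite set X: tau contains
    the empty set and X, and is closed under pairwise intersections
    and pairwise unions (enough in the finite case)."""
    tau = {frozenset(s) for s in tau}
    X = frozenset(X)
    if frozenset() not in tau or X not in tau:
        return False
    return all(a & b in tau and a | b in tau for a in tau for b in tau)

X = {1, 2, 3}
# The discrete topology: every subset of X is open.
discrete = [set(s) for s in
            chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))]
print(is_topology(X, discrete))               # True
print(is_topology(X, [set(), {1}, X]))        # True: a nested chain
print(is_topology(X, [set(), {1}, {2}, X]))   # False: {1, 2} is missing
```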






                    share|cite|improve this answer











                    $endgroup$


















                        answered Dec 4 '18 at 15:39

                        community wiki
                        Abdelmalek Abdesselam
                            16












                            $begingroup$

                            In the comments I offered the "classical" example of Grothendieck's definition of a scheme. But for a more modern example: Fomin and Zelevinsky's definition of a cluster algebra is at first sight a rather ungainly thing, but turned out to be exactly what was needed to unify certain patterns that appeared in various hitherto unrelated areas of mathematics like discrete integrable systems, the representation theory of associative algebras, Poisson/symplectic geometry, etc.






                            share|cite|improve this answer











                            $endgroup$


















                              answered Dec 3 '18 at 16:27

                              community wiki
                              Sam Hopkins
                                    9












                                    $begingroup$


                                    My interest in this is mainly psychological, namely, how the action of naming something somehow brings it into existence and organizes the world around it.




                                    The power of naming...



                                    J.-M. Souriau wrote a book meant to take anyone from the crib to relativistic cosmology and quantum statistics — that he unfortunately (predictably?) never got to really finish, but the draft is online. He starts by saying that all babies are (of course) natural born mathematicians, and everyone’s “first and sensual” abstraction is when they name that recurring warm blob “mommy”.



                                    Surely that qualifies as “crucial to further understanding”; he argues that half of math is more of the same, and tells stories illustrating the power of naming numbers, points, the unknown $(x)$, maps, variables, determinants, groups, morphisms, cohomologies... all would qualify as answers. It’s not just about coming up with epoch-making definitions, either: teaching students to define and name variables in their problems is also what we do.



                                    We’ve all seen quotes to the effect that it’s all in the definitions. “Success stories” of this kind largely make up the history of math, or at least one way to look at it. Specific recent ones that likely inspired the book (and made it to the MSC) would be symplectic manifold (1950), symplectic geometry (1953; earlier it was this), and moment(um) map (1967).



                                    (I agree with others that this is a near-duplicate, and will transfer my answer there if needed.)






• I've heard Noam Chomsky theorize in the reverse direction: that the need to count inductively helped evolve recursive brain structures that were then repurposed for language. – Daniel R. Collins, Dec 4 '18 at 5:41

• This is perhaps restating Heraclitus' Logos, a concept considered so important in its day that it was made the opening passage of the Christian Bible. – user334732, Dec 5 '18 at 5:49

• One might also reflect upon the critical role of words in annotating things, which is itself integral to understanding: once a concept is annotated we can begin to annotate its properties, and then no longer need to hold the full concept in mind, only the salient parts thereof. This simplifies thinking about it and makes progress possible where the degree of complexity would otherwise be too great. – user334732, Dec 5 '18 at 6:02


















edited Dec 3 '18 at 22:29 · community wiki · 4 revs · Francois Ziegler
                                    Well, this is basically the same answer as the one by Alexandre Eremenko, but here goes: The particular form of the definition of derivative is crucial for partial differential equations. Using weaker notions than classical derivatives leads to a whole new theory of solutions. While this is (part of) what the answer about distributions is about, the particular case of weak derivatives due to Sobolev predates distributions and is of prime importance to the field of PDEs. Also, weak derivatives allow for a rich theory of function spaces (Banach and Hilbert ones).
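To make Sobolev's notion concrete (this is the standard definition, added here for reference rather than taken from the answer itself): on an open interval $\Omega \subset \mathbb{R}$, a locally integrable function $v$ is the weak derivative of $u$ if

$$\int_\Omega u\,\varphi'\,\mathrm{d}x = -\int_\Omega v\,\varphi\,\mathrm{d}x \qquad \text{for all } \varphi \in C_c^\infty(\Omega).$$

For example, $u(x) = |x|$ on $(-1,1)$ is not classically differentiable at $0$, yet it has the weak derivative $v(x) = \operatorname{sgn}(x)$, as one checks by integrating by parts on each half-interval.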






answered Dec 3 '18 at 15:07 · community wiki · Dirk

I want to add the notion of Tree-Width to the list.

Roughly speaking, tree-width measures how far a graph is from being a tree. A tree has tree-width 1, while a $k$-clique has tree-width $k-1$. Tree-width has been widely used as a parameter in the analysis of graph algorithms.

I think, like every other clever definition, the definition of tree-width makes a lot of connections clearer once we internalize it, and yet it is not something that is easy to come up with.



                                            For reference,




                                            A tree decomposition of a graph $G = (V, E)$ is a tree, $T$, with nodes
                                            $X_1$, ..., $X_n$, where each $X_i$ is a subset of $V$, satisfying the following
                                            properties (the term node is used to refer to a vertex of $T$ to avoid
                                            confusion with vertices of $G$):




                                            1. The union of all sets $X_i$ equals $V$. That is, each graph vertex is contained in at least one tree node.


                                            2. If $X_i$ and $X_j$ both contain a vertex $v$, then all nodes $X_k$ of $T$ in the (unique) path between $X_i$ and $X_j$ contain $v$ as well. Equivalently, the
                                              tree nodes containing vertex $v$ form a connected subtree of $T$.


                                            3. For every edge $(v, w)$ in the graph, there is a subset $X_i$ that contains both $v$ and $w$. That is, vertices are adjacent in the graph
                                              only when the corresponding subtrees have a node in common.


The width of a tree decomposition is the size of its largest set $X_i$
minus one. The treewidth $\mathrm{tw}(G)$ of a graph $G$ is the minimum width among
all possible tree decompositions of $G$.
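To make the definition tangible, here is a small self-contained Python sketch of my own (the function names `is_tree_decomposition` and `width` are made up for illustration): it checks the three properties above for a candidate decomposition of the 4-cycle, which has treewidth 2.

```python
from itertools import chain

def is_tree_decomposition(vertices, edges, bags, tree_edges):
    """Check the three tree-decomposition properties.

    bags: dict mapping tree node -> set of graph vertices
    tree_edges: edges of the tree T on the bag nodes
    """
    # Property 1: every graph vertex appears in at least one bag.
    if set(chain.from_iterable(bags.values())) != set(vertices):
        return False
    # Property 3: every graph edge is contained in some bag.
    if not all(any({u, v} <= bag for bag in bags.values()) for u, v in edges):
        return False
    # Property 2: for each vertex, the bags containing it induce a
    # connected subtree of T (checked by a DFS restricted to those nodes).
    for v in vertices:
        nodes = {i for i, bag in bags.items() if v in bag}
        seen, stack = set(), [next(iter(nodes))]
        while stack:
            x = stack.pop()
            if x in seen:
                continue
            seen.add(x)
            for a, b in tree_edges:
                if a == x and b in nodes:
                    stack.append(b)
                elif b == x and a in nodes:
                    stack.append(a)
        if seen != nodes:
            return False
    return True

def width(bags):
    # Width = size of the largest bag minus one.
    return max(len(bag) for bag in bags.values()) - 1

# The 4-cycle a-b-c-d-a: two overlapping bags joined by one tree edge.
V = ["a", "b", "c", "d"]
E = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
bags = {1: {"a", "b", "c"}, 2: {"a", "c", "d"}}
T = [(1, 2)]
print(is_tree_decomposition(V, E, bags, T))  # True
print(width(bags))                           # 2
```

Note that this only verifies a given decomposition; computing the treewidth itself (the minimum over all decompositions) is NP-hard in general.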







edited Dec 7 '18 at 10:55 · community wiki · 2 revs · Agnishom Chattopadhyay

                                                    Euclid did not have real numbers. He could say that the length of one line segment goes into the length of another a certain number of times, or that the ratio of lengths of two segments was the same as that of two specified numbers ("numbers" were $2,3,4,5,...,$ and in particular $1$ was not a "number") or that the ratio of the length of the diagonal of a square to the side was not the ratio of any two numbers.



                                                    There is a much celebrated essay by James Wilkinson titled The Perfidious Polynomial that I have not brought myself to read beyond the first couple of pages because of the profound stupidity of what Wilkinson claims is the history of the development of number systems. He states with a straight face that negative numbers were introduced for the purpose of solving polynomial equations that had no positive solutions, that rational numbers were introduced after that for the purpose of solving yet more equations, that irrational numbers were then introduced for solving yet more, and finally complex numbers for the purpose of solving quadratic equations that lacked real solutions. Did Wilkinson never hear that Euclid proved the statement above about the ratio of diagonal to side of a square, but Euclid had never heard of negative numbers, and did not even have anything that we would consider a system of numbers that includes positive rational and irrational numbers? That imaginary numbers were used to find real solutions of third-degree polynomials? That negative numbers were introduced by accountants for their purposes? And that all of that obviously makes more sense and is more elegant and intellectually satisfying than the childish story he tells? In order to do what Wilkinson says was done, one would need certain definitions that were introduced only much later.






• I quite like an (deliberately) ahistoric treatment of these number sets... – JP McCarthy, Dec 6 '18 at 13:07

• I don't think that a rant about an essay whose first two pages you don't like qualifies as an answer to the question. – Gerry Myerson, Dec 6 '18 at 21:10
















edited Dec 7 '18 at 18:37

community wiki
3 revs, 2 users 57%
Michael Hardy












