The Geometric Viewpoint

geometric and topological excursions by and for undergraduates

• Algorithms for Working with Triangulated Surfaces

Let’s imagine that you and your friends just took a steaming pepperoni pizza out of the oven. How would you go about triangulating the surface (the pizza), that is, dividing it into triangles of equal size? This shouldn’t be too hard, should it? But what if someone took a knife, marked a couple of points on the pizza, and said the triangulation has to contain triangles that have those points as vertices? How would you connect the dots, making sure that the triangles you get yield the fairest division? Not so easy, is it?

This is a problem that most graphics software engineers encounter. But their conundrum is even more complicated, as they deal with millions of points located in 3D space, and the process they’re using has to be automated and work for every possible case. Because we will use this problem to guide us through the concepts presented in this post, we should first offer a clear description of the problem and of the solution we are looking for. In computer graphics, models (let’s say a human face) are made of thousands of patches that consist of a fixed number of points (usually numbers that are perfect squares). These points keep track of their position in 3D space (these are called world coordinates, as they lie in the coordinate system of the model). Because the computer programmer wants to create a realistic human face while keeping the program fast, he or she devises an algorithm that subdivides the patches into subpatches in which adjacent points are less than a certain distance apart in screen coordinates (i.e. the distance between two points on the screen will be smaller than a certain limit). This way, instead of having nice patches made out of 16, 25 or 36 points, we end up with all sorts of random configurations of points. And we have lots of such configurations. We are interested in finding an algorithm that, after the aforementioned subdivision, is able to create a triangulation of all the points in the model, while rendering an image as realistic as possible. By realistic, we understand that we don’t want our triangles to be too skinny, too big or too small, because that would create a deformed face (not that every face isn’t beautiful in its own way). Also, we are not allowed to have edges crossing each other (because that wouldn’t be a triangulation anymore), and we are not allowed to add new points. To keep the problem simple, we will ignore things like aliasing and hidden surface removal, and we will also reduce our example to two dimensions.
The presented method works in all dimensions, and the problem can actually be reduced to two dimensions by creating a bounding cube around the object and projecting the points onto the closest face.

This is the kind of result we are looking for. Note that in this case, a different type of subdivision from the one we presented is used. The points are denser in regions where the protuberances of the face are more pronounced.

This is how we can reduce the problem from three dimensions to only two dimensions.

A mathematical structure that encompasses all of the above requirements is the Delaunay Triangulation. Before jumping into any algorithms, we shall first try to gain a mathematical understanding of Delaunay Triangulations. We can define Delaunay Triangulations in two ways:

• Geometrically, a Delaunay triangulation for a set P of points in a plane is a triangulation DT(P) such that no point in P is inside the circumcircle (the unique circle that passes through each of the triangle’s three vertices) of any triangle in DT(P).
• Topologically, a Delaunay triangulation of a ﬁnite set ${{S}\subseteq \mathbb{R}^n}$ is isomorphic to the nerve of the Voronoi diagram of ${{S}}$.

As with most things in life, let us first ignore the harder definition and focus on the geometric one. The following image should make the definition clearer:

As you can see, the first picture is a Delaunay Triangulation because there are no points within any of the four circumcircles.

Because for our set ${{S} = \left\{ {A,B,C,D,E}\right\} }$ no points lie within the circumcircles of triangles ${\Delta ABC,\Delta ACE,\Delta CED,\Delta BCD}$, we have ${{DT(S)} = \left\{\Delta ABC,\Delta ACE,\Delta CED,\Delta BCD\right\}}$. Note that the triangulation of a set of points is a set of triangles. This definition is clear and easy, and it is the one we will use when implementing our algorithm. However, why this specific triangulation works for our imposed conditions is not that apparent. If we take a look at the three triangulations above, we can observe that the Delaunay Triangulation is the most ‘aesthetically pleasing’ one, in the sense that we don’t have long, slim triangles and that we only connect points that are close to each other. As it turns out, this ‘aesthetically pleasing’ property comes from the fact that Delaunay Triangulations maximize the minimal angle. In other words, the minimal angle over all the triangles in a Delaunay Triangulation of a set ${{P}}$ will be larger than the minimal angle of any other triangulation of ${{P}}$. Why does the property that no point of ${{P}}$ lies within the circumcircle of any triangle maximize the minimal angle? Pick a triangle in our triangulation and look at its smallest angle. Observe that as we move the vertex at that angle away from the center of the circumcircle, two things happen: the circle grows, along with the odds of another point of the set entering it, and the angle decreases further. So, because we are trying to form triangles with circumcircles that are as small as possible (ATTENTION: this is only an implication that is generally true; it isn’t by any means a requirement, as Delaunay triangulations are not concerned with minimizing edge lengths), we are ensuring that our triangulation connects points that are close and that we avoid ‘ugly’ triangles. This makes the Delaunay triangulation a perfect solution to our problem.
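Since the empty-circumcircle condition is the test we will keep applying, it helps to see it concretely. A common way to check it, sketched here in Python (my own illustration, not code from the post), is the sign of a 3×3 determinant, valid when the triangle a, b, c is listed counter-clockwise:

```python
def in_circumcircle(a, b, c, d):
    """True if point d lies strictly inside the circumcircle of the
    triangle a, b, c (each a pair (x, y), listed counter-clockwise).
    The sign of this 3x3 determinant decides the question."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
           - (bx * bx + by * by) * (ax * cy - cx * ay)
           + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0
```

For example, for the triangle with vertices (0, 0), (1, 0), (0, 1), the circumcenter (0.5, 0.5) tests as inside, while a far-away point does not.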
Other properties of Delaunay Triangulations that we will take for granted are: every set of points in Euclidean space has a Delaunay Triangulation, which is unique if no four points lie on the same circle (in higher dimensions, no $d+2$ points on the same sphere), and the union of all the triangles forms the convex hull of the set of points (i.e. the minimum convex set containing all the points, the shape a rubber band would take if stretched around the points of the set).

Now let’s turn our attention towards the topological definition of a Delaunay Triangulation. At first glance, the definition seems discouraging, but as we set out to explain the terms, we will see that the sky isn’t that gray. The first word of the day is ‘isomorphic’, which for all intents and purposes means ‘the same’. The second math term is ‘nerve’. Let’s look at the picture below and try to make sense of it:

The nerve (red) of a collection of triangles (green)

Given a union of triangles ${{T_i}}$, we can form its nerve ${{N}}$ by following these steps. Start with ${{N}}$ empty, and for each triangle in the union associate one point ${{t_i}}$ and add it to ${{N}}$. Then, look at the non-empty intersections of the triangles (non-empty means that they share at least a point), and for every point shared by at least two triangles add a new set formed with the ${{t_i}}$‘s corresponding to the triangles that share that point. In this process you cannot repeat sets. If you correctly cover all the possible intersections, ${{N}}$ will be the nerve of the union of triangles. Observe that ${{N}}$ is a set containing sets of one or more points, sets which we can graphically represent through points, edges and polygons, as seen in the picture above. The final unknown term is “Voronoi diagram”. Again, a picture comes in handy:
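The steps above can be sketched in code. Here is a small Python illustration (mine, not from the post) that builds the nerve of any finite collection of sets, representing each simplex as a frozenset of indices:

```python
from itertools import combinations

def nerve(sets):
    """Build the nerve of a finite collection of sets: one abstract
    vertex per set (its index), and a simplex (frozenset of indices)
    for every sub-collection with a non-empty common intersection."""
    simplices = set()
    for k in range(1, len(sets) + 1):
        for idx in combinations(range(len(sets)), k):
            common = set(sets[idx[0]]).intersection(*(sets[i] for i in idx[1:]))
            if common:  # shared point found: record the simplex
                simplices.add(frozenset(idx))
    return simplices
```

For two triangles sharing a point (modeled as vertex sets {1,2,3} and {3,4,5}) plus a disjoint one, the nerve is two points, an edge between them, and an isolated point.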

A nicely colored Voronoi Diagram of 20 points.

We form the Voronoi cell of a point P by taking the intersection of all the half-spaces that contain P.

The Voronoi diagram (the whole first picture) of a set of points (the black points) ${{S}\subseteq \mathbb{R}^n}$ is the collection of all the Voronoi cells (the colored polygons), denoted by ${{V_u}}$, where ${{u}}$ is a point of ${{S}}$ and ${{V_u} = \left\{x\in\mathbb{R}^n\mid \|x-u\| \le \|x-v\|\ \forall v\in{S}\right\} }$. This definition tells us that every point in the cell ${{V_u}}$ is closer to ${{u}}$ than it is to any other point in ${{S}}$, and that the points on the edges of the diagram are equally distant from the points that generate the incident Voronoi cells. So, using this definition, we can form the Voronoi cell of a point ${u\in{S}}$ by dividing the plane, for each other point ${v\in{S}}$, into two pieces (a.k.a. ‘half-spaces’) using the line whose points are equally far from ${{u}}$ and ${{v}}$. Then, if we take the intersection of all the half-spaces that contain ${{u}}$, we end up with ${{V_u}}$. Although the definition allows for building Voronoi diagrams in spaces of dimension greater than 2, it turns out that for computational purposes this is not a feasible process, as it takes a huge amount of time and storage space.
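The definition has a handy computational reading: a query point x belongs to the cell of its nearest site, since surviving every half-space test $\|x-u\| \le \|x-v\|$ is the same as u minimizing the distance. A one-line Python sketch (my helper name, not from the post):

```python
import math

def voronoi_cell_of(x, sites):
    """Return the site u in S whose Voronoi cell V_u contains the
    query point x: the nearest site, i.e. the u for which x passes
    every half-space test |x - u| <= |x - v|."""
    return min(sites, key=lambda u: math.dist(x, u))
```

This works in any dimension, since `math.dist` accepts points of any length.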

Stereographic Projection

To plunge even deeper into topology, we present two more constructions for the Voronoi cells. To do this, we will go down one more dimension to create some visual intuition for the constructions, and then let the reader exercise their newly gained knowledge by extending the example to two dimensions. Use the picture below for reference. The Voronoi diagram of a set ${{S}\subseteq \mathbb{R}^1}$ is nothing more than a segmented line, where the segments are the Voronoi cells (in our drawing, the Voronoi diagram of the set ${{S} = \left\{ {a,b,c,d}\right\} }$ is the segmented equator). We will lift our Voronoi cells to two dimensions by using stereographic projection. Intuitively, the stereographic projection of a circle (a sphere in two dimensions) is the shadow of its points cast by a single light on the surface of the circle onto a line (a plane in two dimensions). By casting shadows, we mean picking a point ${{N}}$ and drawing lines from it that intersect both a point on the circle and the line. This creates a mapping (an isomorphism, if you will) between the circle minus the point ${{N}}$ and the line. So, we draw a circle that has our space as its equator, and take the north pole as our point ${{N}}$. Using the inverse of the stereographic projection of the circle (i.e. we start from points in our space and draw lines that go through ${{N}}$ and intersect the circle; the brown lines in our drawing), we map our set ${{S} = \left\{ {a,b,c,d}\right\} }$ onto the circle. For the first construction, we draw tangent lines to the circle at the newly obtained points. These are Pa, Pb, Pc, Pd in our drawing. Look carefully at their intersections. It seems that if we were to draw a line connecting an intersection point and ${{N}}$, we would intersect the line at the boundary of the Voronoi cells of the points that yielded the tangent lines. This turns out to be true.
It is also true that if we draw lines from ${{N}}$ to the stereographic projections of the points in our space, the tangent line which we first intersect corresponds to the Voronoi cell to which the point belongs. For our second construction, we can build circles (Ca, Cb, Cc, Cd) that contain both ${{N}}$ and the points of ${{S}}$ and that are tangent to the equator. Take a moment to convince yourself that these circles are unique. Again, look at the points of intersection of the circles and remark that a line containing ${{N}}$ and any of these points also contains the boundary of two Voronoi cells. Thus, the first intersection of the segment connecting a point in our space and ${{N}}$ is with the circle corresponding to the Voronoi cell to which the point belongs.
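The inverse stereographic projection used here has a simple closed form. A quick Python sketch (my own parametrization: unit circle centered at the origin, north pole N = (0, 1), the line taken as the x-axis through the center):

```python
def inv_stereo(x):
    """Inverse stereographic projection onto the unit circle.

    Maps a point x on the real line (drawn through the circle's
    center) to the point where the segment from (x, 0) toward the
    north pole N = (0, 1) meets the circle."""
    d = x * x + 1.0
    return (2.0 * x / d, (x * x - 1.0) / d)
```

Sanity checks: 0 maps to the south pole (0, -1), 1 maps to (1, 0), and every image lands on the unit circle.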

How to construct Voronoi cells using lifting

Having our terms all sorted out, it becomes apparent what the second definition means. Broadly, it tells us that by drawing edges between the points that generate two neighboring Voronoi cells, we will get a Delaunay Triangulation of the set of points. Observe that the edges we draw to geometrically represent the nerve of the Voronoi diagram are always perpendicular to the boundaries of the cells. Just try for a moment to draw a Voronoi diagram that yields triangles with small angles, and you will begin to see why this property maximizes the minimal angle. The only problem would be having more than three cells meeting at a point. As it turns out, this is equivalent to four points lying on the same circle, which, as we’ve stated in the first part, makes for more than one Delaunay Triangulation.

These constructions are less intuitive, and you might ask why we need them when we already have something as simple as the first definition of a Delaunay Triangulation. The reason is that there exist algorithms, such as Fortune’s algorithm or the Bowyer-Watson algorithm, that generate Delaunay Triangulations using Voronoi cells. These algorithms prove all the more useful as they also create Voronoi diagrams, which have applications across a vast array of domains.

Delaunay Triangulation derived from a Voronoi Diagram

Now that we have a strong theoretical basis, let us focus on one algorithm for creating a Delaunay Triangulation. Note that there are a lot of algorithms for doing this, but the one that we will be presenting is one of the simplest and most time efficient. The algorithm that we choose to implement employs the “divide and conquer” strategy.

Let us start with a set of random points (we will work with an example that has 12 points to gain a better intuition). We begin by ordering them with respect to their x-coordinate. If two points have the same x-coordinate, their ordering is determined by their y-coordinate, in increasing order.

This is our example of 12 points with the labeling imposed by our ordering.

We continue by subdividing the points into halves using their labeling, until we are left with sets of two or three points. If we can’t split a set into two equal sets, we let the first subset have one more element than the second. We then triangulate the newly formed sets by creating triangles where we are left with three vertices, and edges where we are left with two vertices.
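The sorting and splitting steps above can be sketched in a few lines of Python (my own sketch; the post does not give code):

```python
def initial_groups(points):
    """Sort the points by x-coordinate (ties broken by y), then split
    recursively until every group has two or three points; an uneven
    split leaves the extra point in the first (left) half.  Groups of
    three become triangles, groups of two become single edges."""
    def divide(p):
        if len(p) <= 3:
            return [p]
        mid = (len(p) + 1) // 2       # left half may get one more
        return divide(p[:mid]) + divide(p[mid:])
    return divide(sorted(points))     # tuples sort by x, then y
```

With 12 points we get four groups of three, matching the example; with 7 points the uneven splits give groups of sizes 2, 2, 3.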

The way we divide the set of points and the triangulations we can form. The thickness of the black lines indicates the order in which we divide, starting from the thickest.

By taking a look at the picture, we can observe that we obtain a bunch of triangles with sharp angles, which should give us an idea that most of the triangles formed in these first steps are not part of the Delaunay triangulation. However, as we discussed earlier, the triangles in a Delaunay triangulation tend to connect nearby points, so it is very likely that some of the edges coincide. For a large number of vertices, this is a huge time saver.

The next step is to merge neighboring sets of points by adding edges in the opposite order in which we have divided them. We will use the following notation to distinguish between edges: LL are edges that were formed using vertices from the left division, RR are edges that were formed using vertices from the right division, and LR are edges that were formed using vertices from both. We will now try to merge the first two divisions. For now we only have LL and RR edges. Throughout the process we will delete LL and RR edges but never add more, and we will try to add LR edges that are as good as possible. Remember that the union of the triangles in a Delaunay triangulation of a set of points forms the convex hull of that set. Thus, we can safely assume that our first LR edge, which will certainly be in the final triangulation, is the edge between the two points with the smallest y-coordinate of the two divisions. Observe that this is also true for the upper part of the set of points.
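Following the convex-hull observation, the bottom base edge can be picked out mechanically. A hedged sketch (my helper, following the post’s simplification of taking the lowest point on each side, with ties broken by x-coordinate):

```python
def base_lr_edge(left_half, right_half):
    """First LR edge of a merge: join the lowest point of each half
    (ties broken by x-coordinate).  Lying on the convex hull, this
    edge survives into the final triangulation."""
    lowest = lambda pts: min(pts, key=lambda p: (p[1], p[0]))
    return lowest(left_half), lowest(right_half)
```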

Now it’s time to look for the next LR edge, which can only be added above the previously created one. To get a triangulation, we need this edge to start at one of the vertices of the previously added edge. Thus, we are looking for a vertex that forms, with the previous LR edge, a triangle whose circumcircle does not contain any other points. By the existence of a Delaunay triangulation and by its uniqueness (if we don’t have four co-circular points), we are guaranteed that such a point exists and that it is unique. We will look at the possible candidates, pick the best one from each side, and then the best one of the two. Let us start with the right side. We will look at the vertices that share an edge with point 5 and pick the ones that form the two smallest angles with the previously added LR edge. The vertex associated with the smallest angle will be the Current Potential Candidate (CPC) and the other vertex will be the Next Potential Candidate (NPC). If our CPC fulfills the two following criteria, we have found our candidate for the right side:

• The clockwise angle from the base LR edge to the potential candidate must be less than 180 degrees. (This ensures that we have triangles and that we don’t go outside the convex hull.)
• The circumcircle defined by the two endpoints of the base LR edge and the potential candidate must not contain the next potential candidate in its interior. (This is the Delaunay criterion that no circumcircle contains a point in its interior.)

If our CPC doesn’t fulfill the criteria, we set CPC to NPC and then get the next NPC by looking at the next vertex in order of the angle mentioned above. We repeat this process until we either find a good candidate or run out of candidates. If we do run out of candidates, the best candidate from the other side wins. If neither side has a candidate, then our merge is complete.
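Both criteria reduce to sign computations. Below is a self-contained Python sketch (my own formulation, not the post’s code): the sub-180-degree condition is equivalent to the candidate lying strictly above the directed base edge, and the circle test is the usual circumcircle determinant:

```python
def _ccw(a, b, c):
    """Twice the signed area of triangle a, b, c
    (> 0 means counter-clockwise)."""
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def _in_circle(a, b, c, d):
    """True if d lies strictly inside the circle through a, b, c."""
    if _ccw(a, b, c) < 0:            # normalize orientation to CCW
        b, c = c, b
    ax, ay = a[0]-d[0], a[1]-d[1]
    bx, by = b[0]-d[0], b[1]-d[1]
    cx, cy = c[0]-d[0], c[1]-d[1]
    return ((ax*ax + ay*ay)*(bx*cy - cx*by)
            - (bx*bx + by*by)*(ax*cy - cx*ay)
            + (cx*cx + cy*cy)*(ax*by - bx*ay)) > 0

def candidate_ok(base_l, base_r, cpc, npc):
    """Apply the two merge criteria to a potential candidate cpc:
    the angle with the base LR edge must be under 180 degrees
    (i.e. cpc lies strictly above the edge base_l -> base_r), and
    the circumcircle of the base edge and cpc must not contain the
    next potential candidate npc."""
    if _ccw(base_l, base_r, cpc) <= 0:     # angle >= 180 degrees
        return False
    return not _in_circle(base_l, base_r, cpc, npc)
```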

The process for the left side is similar, but instead of point 5 we use point 2, and we measure the angle at point 2 counter-clockwise. If we have candidates from both sides, we repeat the circle test and pick the point for which the other point is not in the circumcircle. By the existence of the Delaunay triangulation, we are guaranteed that such a point exists.

Once we get a “winner” vertex, we delete any edge that intersects the newly formed LR edge.

We repeat this process until we have completed the merge, then move on through all the merges until our triangulation is complete. Note that at any step of the way we can delete previously added edges, so don’t assume that any of the created LR edges will be part of the final triangulation.

I will display the process for the first merge, and then just the results of the second and third merges:

Vertex 3 and vertex 4 are the candidates from each side, but vertex 3 obviously wins.

We add the new edge and repeat the process

Right side:
We start with 4, but it fails the test, so 6 is selected.
Left side: we can’t pick 2 because the angle is more than 180 degrees, but 1 works.
Of the two candidates, 6 wins.

We add the new edge. Note the deletion of the RR edge.

Observe that we have 4 co-circular points, so we can pick either edge 34 or edge 16.

We add the final two edges. The last one doesn’t require any computation.

The final Delaunay Triangulation of our set of points. A true beauty!

Triangulations seem like an easy mathematical concept, but things get a lot more complicated when you only have simple instructions at your disposal. Next time you want a brain teaser, try to forget everything you have just read, and attempt to create an algorithm that triangulates (not even a Delaunay Triangulation) the convex hull of a set of points. You will see how things that you can easily do with a pen and a piece of paper become a lot harder to translate into instructions that work for all cases and that a machine can understand. That’s why computer science requires a vast knowledge of mathematical concepts that can describe the same object or process in different ways, allowing the programmer to pick a method that can be transformed efficiently into code.


• The Long Line

Topology can be best described as the study of certain “spaces” and the properties they have. Now it is important to figure out which spaces are essentially the “same” and which are different. We define two spaces to be the “same” if we can transform one into the other continuously, and the transformation we performed can be undone continuously as well. Now you may be asking, “what exactly do you mean by a ‘space’?” A topological space is defined as a set $X$ with an associated set $\mathscr{T}$ consisting of subsets of $X$ which satisfies certain properties. The elements of $\mathscr{T}$ are declared to be the open sets of our topological space $(X,\mathscr{T})$.

Now a major aspect of topology is to define properties that different spaces could have and what these properties should “say” about a space. For example, one may want to determine when a space is connected together or when it can be broken up into different pieces. Or one may wish to determine when any two points in a space can be connected together by a path through the space. Now at first glance these two definitions seem to be describing the same property, but in fact they aren’t. The first definition describes when a space is “connected” and the other when a space is “path-connected.” It turns out that “path-connected” is a stronger claim about a space than just “connected.” In other words, there are spaces which are connected but not path-connected, for example the Topologist’s Sine Curve. However, every path-connected space is also connected. One may begin to wonder what other properties in topology share this type of connection. In other words, which properties imply other properties and which do not. Now to show that one property implies another, one must start from the most general assumptions and come up with some mathematical proof. However, to show that some property does not imply another, one must simply come up with a counterexample. It is the latter strategy that this post is concerned with. I will be discussing a particular topological space, “the long line,” that can be used as a counterexample to certain properties of a space, namely different levels of “compactness.”

Before we begin discussing the long line it will be useful to have an overview of an order topology. Now on the real line $\mathbb{R},$ the order topology ends up coinciding with the usual definition of open set, namely the union of open intervals. The key property of the real line that allows us to have this definition of open intervals is that it is totally ordered. This simply means that given any two points $a,b \in \mathbb{R}$, either $a\leq b$ or $b\leq a$. It turns out that we can define an order topology on any set which is totally ordered by some relation $\leq.$ Given a totally ordered set $(X,\leq)$ we can define an order topology on $X$ by first letting $\mathscr{T}''$ be the set consisting of all sets of the form $\{y : y < a\}$ and $\{y : b < y\}$ for any $a,b \in X$. We then create another set $\mathscr{T}'$ of all the finite intersections of elements of $\mathscr{T}''$, and then let our topology $\mathscr{T}$ be all the sets which can be expressed as arbitrary unions of elements of $\mathscr{T}'$. The reader should think about this construction in terms of the real line and note that we end up producing all the open intervals and unions of open intervals of the real line. Refer to the image below to see how the construction comes together.
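For a finite totally ordered set this construction can be carried out literally. The sketch below (mine, purely illustrative: on any finite chain the order topology comes out discrete) builds the rays, takes their pairwise intersections as a basis, and closes under unions:

```python
def order_topology(X):
    """Order topology on a finite totally ordered set X (a tuple in
    increasing order): start from the open rays {y : y < a} and
    {y : a < y}, intersect them to get a basis, close under unions."""
    rays = [frozenset(X)]                  # the empty intersection
    for a in X:
        rays.append(frozenset(y for y in X if y < a))
        rays.append(frozenset(y for y in X if a < y))
    basis = {r & s for r in rays for s in rays}
    opens = {frozenset()}
    for b in basis:                        # all unions of basis sets
        opens |= {o | b for o in opens}
    return opens
```

On the chain (0, 1, 2) the rays already intersect down to singletons, so every one of the 8 subsets is open.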

Example of the creation of an open interval from elements of $\mathscr{T}''$.

We are almost ready to construct the long line, but first we must make one more detour, into the world of set theory. The principal object we will use in our construction of the long line is the ordinal. Ordinals are just a very special type of well ordered set. Now a well ordered set is very similar to a totally ordered set, with the additional property that every non-empty subset has a minimum element. An amazing fact from set theory is that every set can be well ordered. It turns out that the ordinals can be thought of as the standard well ordered sets. In fact, every well ordered set can be put into a bijective correspondence which preserves order with a unique ordinal. Now you may be wondering what’s so special about these ordinals. An ordinal $X$ can be defined as a well ordered set with the property that each element $a \in X$ is exactly the set of all elements in $X$ which precede $a$. In other words, $X$ is an ordinal if for every element $a \in X$, $a = \{x \in X : x < a\}$. Then you may be wondering if such a thing even exists. Refer to the images for a glimpse of the finite and the first few infinite ordinals. You may be interested in a more thorough description of ordinals, which can be found here.
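The definition “each element is the set of its predecessors” can even be typed into a computer. A toy Python rendering of the finite von Neumann ordinals (my illustration, not from the post):

```python
def ordinal(n):
    """The n-th von Neumann ordinal as a nested frozenset:
    0 = {}, 1 = {0}, 2 = {0, 1}, ...  The successor step is
    n + 1 = n and {n} together, so each ordinal is exactly the
    set of all smaller ordinals."""
    o = frozenset()
    for _ in range(n):
        o = o | {o}           # successor: adjoin the ordinal itself
    return o
```

For example, `ordinal(3)` has exactly three elements: `ordinal(0)`, `ordinal(1)` and `ordinal(2)`.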

The first few finite ordinals.

The first few infinite ordinals.

Now it turns out that inclusion will always be the well order on an ordinal. It also turns out that there are a lot of ordinals. In fact, the collection of all ordinals is not even a set! Naively, we can think that there are just too many ordinals for them to be a set. Now as seen above, there are ordinals with an infinite number of elements. In fact, the first infinite ordinal can be used to make sense of the natural numbers. We often denote the first infinite ordinal as $\omega$. Amazingly, there are different sized infinities, and $\omega$ can be thought of as the “smallest” infinity. Sets which can be put in a bijective correspondence with it are deemed “countable.” The next size of infinity is known as “uncountable” and we will let $\Omega$ stand for the first uncountable ordinal.

After the long build up we are finally ready to define the long line. The long line can be thought of as taking uncountably many copies of the interval $\lbrack 0,1)$ and “stacking” them end to end. For comparison’s sake, we can think of the positive real line as countably many copies of the same interval “stacked” end to end. The long line must be very long indeed! While this definition may provide a good image, it leaves little to work with as far as properties go. Here is a more precise definition: the long line $L$ is the cartesian product $\Omega \times \lbrack 0,1)$ where the elements are ordered lexicographically. In other words, $(\alpha, a)\leq (\beta,b)$ when either $\alpha < \beta$, or $\alpha = \beta$ and $a \leq b$. It is easy to see that this is a total order, and so we can construct the order topology on $L.$ While this definition may seem strange, it is actually very easy to visualize. Take the set of all non-negative real numbers as an example. We can think of this set as the cartesian product $\mathbb{N} \times \lbrack 0,1)$ (keeping in mind that the natural numbers $\mathbb{N}$ can be fully described by the ordinal $\omega$). First note that we can think of an element $(n,d)$ of our set as telling us first the integer part of the number and then the decimal part. Now I ask the reader to consider how they would compare the size of two different positive real numbers. First you would compare the integer parts, and if they were equal you would then move on to the decimal parts. That’s comparing lexicographically! So the long line $L$ is just a much “longer” version of that example. The image below provides a description of our analogy to the long line; unfortunately, it is very difficult to create a visual for an uncountable well ordered set.
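The identification of $[0,\infty)$ with $\mathbb{N}\times\lbrack 0,1)$ and the lexicographic comparison are easy to make concrete. A small Python sketch (the helper names `split` and `lex_leq` are mine):

```python
import math

def split(x):
    """Write a non-negative real as (integer part, decimal part),
    identifying [0, inf) with N x [0, 1)."""
    n = math.floor(x)
    return (n, x - n)

def lex_leq(p, q):
    """Lexicographic order on pairs: compare first coordinates,
    fall back to the second only on a tie.  Python tuples already
    compare exactly this way."""
    return p <= q
```

Comparing 2.9 and 3.1 is decided by the integer parts alone; comparing 3.1 and 3.5 falls through to the decimal parts.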

The real line as a cartesian product.

Now you may wonder, “how much longer is the long line?” Perhaps the best way to compare the “length” of these lines is by looking at sequences. It is common knowledge that a strictly increasing sequence of real numbers need not converge: it can run off to infinity, like $x_n = n$. It may be tempting to conclude the same about the long line, however that would be a mistake.

Lemma 1: Every increasing sequence converges in the Long Line.

Proof: Suppose $(x_n)$ is an increasing sequence in the long line. Now consider the first element of each term of our sequence (remember elements of the long line are doubles, the first being the ordinal, and the second being the decimal part). Let $(\alpha_n)$ be the sequence of first elements. Therefore $(\alpha_n)$ is an increasing sequence of ordinal numbers. Now I will present it as fact that every increasing sequence of ordinal numbers has a limit point. Also $\Omega$ can never be the limit point of a sequence of countable ordinals. Therefore $(\alpha_n)$ must converge to a countable ordinal and therefore an ordinal that is represented in the long line. Now, if $(\alpha_n)$ never reaches a point where it remains constant, then we never have to consider the decimal part of the sequence since the limit point of the sequence of ordinals together with 0 will be the limit point of our sequence. So suppose that eventually $(\alpha_n)$ becomes a constant sequence. Let $\alpha'$ be the eventual constant term. Now we will consider the sequence of decimal parts of all terms after the sequence becomes constant in terms of the ordinals. Let $(d_m)$ be this sequence. Since $\lbrack 0,1)$ is bounded and $(d_m)$ must be increasing, it is easy to conclude that it converges, possibly to 1 in which case we take the point $(\alpha'+1,0)$ as our limit point for the original sequence. Therefore we can conclude that every increasing sequence converges in the long line. $\square$

Now amazingly with this one fact we glean even more information about the long line. For instance the long line is sequentially compact. First a quick definition, a topological space is sequentially compact if every sequence in the space has a convergent sub-sequence. Before I prove this I will prove a quick lemma about sequences in a totally ordered set.

Lemma 2: Every sequence in a totally ordered set has a monotone sub-sequence.

Proof: Suppose $(X,\leq)$ is a totally ordered set. Now let $(x_n)$ be a sequence in $X$. Call $x_p$ a peak of the sequence if for all $n\geq p : x_n \leq x_p$. Clearly there are two cases to consider: either $(x_n)$ has infinitely many such peaks or it has finitely many. First suppose that $(x_n)$ has infinitely many peaks, $\{x_{n_1}, x_{n_2},\ldots \}$. Then we can take $(x_{n_p})$ as our sub-sequence, and by the definition of a peak this sequence must be decreasing (more precisely, non-increasing). Now suppose our sequence has only finitely many peaks, and let $x_{n_0}$ be the first term after the last peak. Since this term is not a peak, there must exist another term $x_{n_1}$ which is greater than $x_{n_0}$. We can then find a term $x_{n_2} \geq x_{n_1}$, since $x_{n_1}$ was not a peak. We can continue this process, creating a sub-sequence $(x_{n_m})$ which is increasing. $\square$
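For a finite sequence the peak argument can be run directly (a Python sketch of mine; the finite case degenerates slightly, since the last term is always trivially a peak):

```python
def monotone_subsequence(xs):
    """Peak argument of Lemma 2 on a finite, non-empty sequence:
    index p is a peak if every later term is <= xs[p].  Many peaks
    give a non-increasing subsequence; otherwise we climb greedily,
    as the proof does after the last peak."""
    if not xs:
        return []
    n = len(xs)
    peaks = [p for p in range(n)
             if all(xs[m] <= xs[p] for m in range(p + 1, n))]
    if len(peaks) >= 2:
        return [xs[p] for p in peaks]      # non-increasing
    sub = [xs[0]]                          # few peaks: climb upward
    for x in xs[1:]:
        if x >= sub[-1]:
            sub.append(x)
    return sub                             # non-decreasing
```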

Now we are ready to prove that the Long Line $L$ is sequentially compact.

Proof: Suppose $(x_n)$ is a sequence in $L.$ By the second lemma we know we can find a monotone sub-sequence $(x_{n_m})$. First suppose that $(x_{n_m})$ is increasing. Then by the first lemma $(x_{n_m})$ converges. Now suppose that $(x_{n_m})$ is decreasing. Since the long line is bounded below, $(x_{n_m})$ is a bounded decreasing sequence, and such a sequence converges to its infimum (the first coordinates are ordinals, so they attain a minimum). So $(x_n)$ has a convergent sub-sequence. $\square$

So the long line is sequentially compact. Of course the next question to ask is whether the long line is compact. Interestingly, it is not. Now the topological definition is a little different than the one given in most calculus classes. A topological space $X$ is compact if every open cover $\mathscr{U}$ of $X$ has a finite sub-cover. A cover is just a collection of open sets whose union is the entire space.

Lemma 3: The Long Line $L$ is not compact.

Proof: Consider the collection of open intervals $\mathscr{U'} = \{ ((\alpha,0), (\alpha+1,0)) : \alpha \in \Omega \}$. The only points $\mathscr{U'}$ misses are those with no decimal part, so we add to this collection sets of the form $((\alpha,\frac{2}{3}),(\alpha+1, \frac{1}{3}))$, which catch the points $(\alpha+1, 0)$, together with one open set containing the initial point $(0,0)$, giving us a cover $\mathscr{U}$. Finally, note that each set of $\mathscr{U}$ contains a point covered by no other set, so if any set is removed we no longer have a cover of $L$; since $\mathscr{U}$ is infinite, it certainly has no finite sub-cover, and we can conclude that $L$ is not compact. $\square$

This then leads us to one more interesting aspect of the long line: the long line is not metrizable. Before I can explain what a metrizable space is, I will give a brief description of a metric space. A metric space is a set together with a “distance” function that determines how far apart two points are. Open sets are then unions of open balls, where an open ball is just the set of all points strictly less than some radius from a given point. For instance, the usual distance function $d$ on $\mathbb{R}$ is just $d(a,b) = |a-b|$. Open balls then correspond to sets of the form $\{x\in \mathbb{R} : |a-x| < \epsilon\}$. Now a metrizable space is a topological space on which one can create a distance function that generates precisely the same open sets that were in the original topology. Metric spaces, and therefore metrizable spaces, usually have “nicer” properties than general topological spaces, especially in how otherwise-distinct properties coincide. The property that interests us here is that in a metric space, and so in a metrizable space, the concepts of sequentially compact and compact coincide. In other words, if a metric space is compact, then it is sequentially compact, and likewise in reverse. So immediately we can see that the long line cannot be metrizable, since it is sequentially compact but not compact. It would therefore be impossible to create a sensible “distance” function on the long line which leads to the construction of all the open sets we have.

Now you may be wondering what the point of creating the long line is. You may ask yourself why anyone should care. Aside from just being able to work with a weird topological space, the creation and examination of the long line has multiple benefits. First off, it gives us a concrete counterexample separating properties that seem so similar. If all you ever work with are metrizable spaces, you won’t ever be able to really see how sequentially compact and compact are different. The long line is just one example of the importance of searching for counterexamples. One may think it possible for spaces to have some properties and not have others, but until a concrete counterexample is created, it’s all just conjecture.

For more information on the long line, see Counterexamples in Topology by Steen and Seebach.

Editor’s note: The author of this post, Josh Hews, was a student in the Fall 2014 Topology course at Colby College taught by Scott Taylor. Submissions to the blog of essays by and for undergraduates on subjects pertaining to geometry and topology are welcome. For more information see the “Submit an Essay” tab above.


• Constant Sequences With Infinitely Many Limits

Posted on November 18, 2014

Back in calculus, we were taught that any convergent sequence (${x_n}$) in ${\mathbb{R}}$ has exactly one limit. This statement is true, and you might have seen the proof before. But is this necessarily the case in non-${\mathbb{R}}$ spaces?

It turns out that this is still true in some non-${\mathbb{R}}$ spaces, but, as we will see later, not all.

But what are spaces where things behave strangely? First, let’s consider the following definition:

Definition: a space H is Hausdorff if for any two distinct points you pick, there are disjoint open sets containing them.

An example of a Hausdorff space is ${\mathbb{R}}$ (the real line). By a standard fact, we know that for any two distinct real numbers x and y, we can find disjoint open intervals containing them (another way to think of this: suppose that the distance between x and y is d. Start by letting ${{I_x}}$ be an open interval centered at x with radius d/2 and let ${{I_y}}$ be an open interval centered at y with radius d/2 as well. It is easy to observe that these two intervals are disjoint.).

It turns out that in every Hausdorff space, a convergent sequence has exactly one limit (as we will prove later). On the other hand, in a non-Hausdorff space, a constant sequence (which obviously converges) can have infinitely many limits!

It might be hard to imagine what non-Hausdorff spaces look like, so we will begin by introducing the concept of topological spaces.

Given a set X, (X, T) is a topological space exactly when these four conditions hold:

1. X ${\in}$ T

2. ${\emptyset}$ ${\in}$ T

3. If a finite number of sets are elements of T, then their intersection ${\in}$ T

4. If we have any collection of sets that are elements of T, then their union ${\in}$ T

and T is called a topology on X (don’t confuse it with topography). Elements of T are exactly open sets in (X, T).
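For finite examples, the four conditions can be checked mechanically. Here is a brute-force sketch (the function name is my own); when T is small it is feasible to test the union of every subfamily directly:

```python
from itertools import chain, combinations

def is_topology(X, T):
    """Check the four conditions for T to be a topology on the finite set X.

    Sets are represented as frozensets.  For a finite T it is enough to
    check pairwise intersections (condition 3) and the unions of all
    subfamilies (condition 4)."""
    X = frozenset(X)
    T = {frozenset(t) for t in T}
    if X not in T or frozenset() not in T:        # conditions 1 and 2
        return False
    for A, B in combinations(T, 2):               # condition 3: finite intersections
        if A & B not in T:
            return False
    # condition 4: unions of every nonempty subfamily
    subfamilies = chain.from_iterable(combinations(T, r) for r in range(1, len(T) + 1))
    for fam in subfamilies:
        if frozenset().union(*fam) not in T:
            return False
    return True
```

For instance, `is_topology({1, 2, 3}, [set(), {1, 2}, {1, 2, 3}])` is true, while dropping `{1, 2}` from a family containing `{1}` and `{2}` fails condition 4.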

Observation: A set can be both closed and open. By the definition, C is a closed set exactly when the complement of C is open. For any given set X and topology T on X, since X and ${\emptyset}$ are in T, they are open sets in the space (X, T). Thus, their complements, ${\emptyset^c}$ = X and ${X^c}$ = ${\emptyset}$ are both closed.

The picture below represents a topological space (X, T), where X = {1,2} and T = {${\emptyset}$, {1}, {2}, {1,2}}. It is easy to check that this is a Hausdorff space.

Before we move on to examples of sequences in non-Hausdorff topological spaces with multiple limits, let’s recall the definition of a limit of a sequence.

Definition: x is a limit of the sequence (${x_n}$) if and only if for any open set U containing x, there is eventually a term of (${x_n}$) in U such that all the succeeding terms of (${x_n}$) are in U as well.

Two examples are given here:

Example 1:

Let F be {1, 2, 3} and T = {${\emptyset}$, {1, 2}, F}. T satisfies all the 4 conditions, so it is a topology on set F. Obviously, (F, T) is not Hausdorff. Consider 1 and 2. Open sets containing 1 are exactly those that contain 2. Thus, regardless of your choice of the pair of open sets containing 1 and 2, the intersection is never empty.

Let (${x_n}$) be a constant sequence such that ${x_n}$ = 1 for every n. For any open set containing 1 (there are two of them: {1,2} and F), every term of (${x_n}$) is in the open set; thus, (${x_n}$) converges to 1. On the other hand, since open sets containing 1 are exactly the ones that contain 2, (${x_n}$) also converges to 2. It is easy to see that the open set containing 3 also contains all terms of (${x_n}$). Thus, the set of limits of (${x_n}$) is {1, 2, 3}.
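For a constant sequence, the definition of a limit collapses to a simple membership test: x is a limit of (c, c, c, …) exactly when every open set containing x also contains c. A sketch (the helper name is mine) recovering the answer of Example 1:

```python
def limits_of_constant(c, X, T):
    """Limits of the constant sequence (c, c, c, ...) in the finite
    space (X, T): x is a limit exactly when every open set containing
    x also contains c, since every tail of the sequence is just c."""
    opens = [frozenset(U) for U in T]
    return {x for x in X if all(c in U for U in opens if x in U)}

F = {1, 2, 3}
T = [set(), {1, 2}, {1, 2, 3}]
lims = limits_of_constant(1, F, T)   # every point of F is a limit
```

Running the same test on the Hausdorff space {1, 2} with the discrete topology returns only {1}, matching the uniqueness result proved below.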

Example 2:

Let’s consider a more interesting example. Let X = ${\mathbb{N}}$ (the set of all natural numbers), and let T = {${\emptyset}$, X}. Obviously, T is a topology on X (${\textbf{note:}}$ for any set X, T = {${\emptyset}$, X} is called an indiscrete topology on X). Clearly, this is not Hausdorff.

Let (${x_n}$) be a constant sequence such that ${x_n}$ = 1 for every n. Let j be an arbitrary natural number. Since every open set in the space (X, T) containing j contains all terms of (${x_n}$), j is a limit of (${x_n}$). Thus, every natural number is a limit of (${x_n}$) in (X, T). This points out that the set of limits does not even need to be finite!

Intuitively, the result of this example may seem contradictory, since the given sequence is constant but is also “arbitrarily” close to infinitely many points. This shows that things could behave bizarrely in non-Hausdorff spaces.

Convergent sequences and their limits in Hausdorff spaces

After seeing these two examples, one might wonder if a convergent sequence in a Hausdorff topological space always has exactly one limit. It turns out that this is true.

Let (X, T) be a Hausdorff topological space and (${x_n}$) be a convergent sequence in the space, we’ll show that it has exactly one limit.

We’ll prove our claim by contradiction. In other words, we’ll assume that our claim is false and show that such assumption can’t be true.

For contradiction, suppose that (${x_n}$) has two distinct limits x and y in X. Since the space is Hausdorff, there are open sets ${U_x}$ and ${U_y}$ containing x and y respectively, with ${U_x}$ ${\cap}$ ${U_y}$ = ${\emptyset}$. Since (${x_n}$) converges to x, there is ${x_{n_k}}$ ${\in}$ ${U_x}$ such that all the succeeding ${x_n}$ are in ${U_x}$.

Similarly, since (${x_n}$) converges to y, there is ${x_{n_m}}$ ${\in}$ ${U_y}$ such that all the succeeding ${x_n}$ are in ${U_y}$. There are two cases:

case 1: ${n_{k}}$ ${\geq}$ ${n_{m}}$ ${\longrightarrow}$ ${x_{n_k}}$ is in both ${U_x}$ and ${U_y}$

case 2: ${n_{k}}$ < ${n_{m}}$ ${\longrightarrow}$ ${x_{n_m}}$ is in both ${U_x}$ and ${U_y}$

Thus, ${U_x}$ ${\cap}$ ${U_y}$ ${\neq}$ ${\emptyset}$, which is a contradiction.

This points out that our assumption is false, and (${x_n}$) must have exactly one limit in X.

Since the space and the sequence are arbitrary, any convergent sequence in a Hausdorff space has exactly one limit.

Convergent sequences and their limits in “weakened” Hausdorff spaces

In mathematics, changing a hypothesis can drastically change the result, and the idea that we are exploring here is no exception. Although a convergent sequence in any Hausdorff space has a unique limit, a convergent sequence in a “weaker” Hausdorff space can have many limits!

Definition: a space H is T1 if for any two distinct points, each has a neighborhood which does not contain the other point.

We can observe that this is weaker than the definition of a Hausdorff space, since the neighborhoods are not required to be disjoint. If for each pair of points there are neighborhoods that happen to be disjoint, then this T1 space is also a Hausdorff space. However, that does not have to be the case. Obviously, every Hausdorff space is also T1, but not the other way around.

But how does using a weaker condition affect the behavior of convergent sequences? As we have seen, a convergent sequence in any Hausdorff space has exactly one limit, and a convergent sequence in a non-Hausdorff space may have many limits, but what can we say about a convergent sequence in a T1 space?

It turns out that it is possible to find a T1 space where a convergent sequence has infinitely many limits! Let’s consider the following example: the space (X, T) where X = ${\mathbb{N}}$ and T = {${O}$ $\subset$ ${\mathbb{N}}$ : ${O^c}$ is finite} $\cup$ {$\emptyset$}. (This is known as the cofinite topology on ${\mathbb{N}}$.)

Proving that this is a topological space is not hard, and we will skip it to discuss more interesting results.

First, we will show that (X, T) is a T1 space. Let a,b be distinct points of X. Since X\{a} and X\{b} have finite complements, they are elements of T. Given that X\{a} contains b but not a and X\{b} contains a but not b, (X, T) is a T1 space.

Consider a sequence (${x_n}$) in (X, T) where ${x_n}$ = n for each n ${\in}$ ${\mathbb{N}}$. We will show that every natural number is a limit of (${x_n}$). Let j be an arbitrary natural number and let ${U_j}$ be an open set containing j. Since ${U_j}$ is an element of T, the complement of ${U_j}$ is finite. In other words, C(${U_j}$) is either ${\emptyset}$ or {${n_1}$, ${n_2}$, …, ${n_k}$} (where the ${n_i}$ are natural numbers). Without loss of generality, we can assume that, in the second case, ${n_k}$ is the biggest element of this finite set.

If C(${U_j}$) is ${\emptyset}$, then ${U_j}$ contains all the natural numbers. Thus, it contains every term of the sequence (${x_n}$).

On the other hand, if C(${U_j}$) is {${n_1}$, ${n_2}$, …, ${n_k}$}, then ${U_j}$ contains all n > ${n_k}$. Equivalently, for all n > ${n_k}$, ${x_n}$ is contained in ${U_j}$.

${U_j}$ is arbitrary, so (${x_n}$) converges to j.

But j is also arbitrary, so the sequence (${x_n}$) converges to all natural numbers.
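The computational content of this argument is simply that a finite complement has a largest element, past which every term of the sequence lies in the open set. A tiny sketch (my own naming):

```python
def tail_index(C):
    """Given the finite complement C of an open set U in the cofinite
    topology on the naturals, return an index N such that x_n = n lies
    in U for every n > N.  (The max of an empty C is taken to be 0.)"""
    return max(C, default=0)

# whatever finite set C we delete, the sequence x_n = n eventually
# stays inside the open set U = N \ C:
C = {2, 7, 31}
N = tail_index(C)
assert all(n not in C for n in range(N + 1, N + 1000))
```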

Question:

We’ve seen that every convergent sequence in any Hausdorff space has exactly one limit and that there are non-Hausdorff spaces where a convergent sequence can have more than one limit. Is it possible to have a convergent sequence in a non-Hausdorff space with exactly one limit?

Saran was a student in Scott Taylor’s Fall 2014 Topology course at Colby College.

References

http://math.stackexchange.com/questions/901154/unique-limits-in-t1-spaces



• Periodic Billiard Paths

“When are we actually going to use this in real life?” A student complains to his or her math teacher. It is the question that we’ve all heard, at one point or another, in our high school years. When we are young, it can be difficult to see how mathematics is related to the ‘real world’. As we grow older, those of us who continue to study mathematics realize that one of the beautiful things about mathematics (and geometry in particular) is that it is present all over the place. This can be seen, for example, in the fact that the golden ratio and the Fibonacci numbers show up, well, almost everywhere. It should be no surprise, then, that almost anything can be studied using mathematics. Here we will see how to take a simple game of skill, billiards, and use geometry to study the game mathematically.

Let’s get acquainted with the ‘rules’ of mathematical billiards, which are somewhat different from the game with which many of us are familiar. First off, we play with only one infinitesimally small billiard ball. For this reason, our goal is not to hit other billiard balls into pockets; rather, it’s to see how our ball travels once we’ve set it in motion. We also play on a frictionless table and require all collisions to be elastic (this means that energy is conserved during each collision; i.e. the ball doesn’t ever slow down). So, once our ball starts moving, it doesn’t stop. The only exception is that we say the ball has stopped if it lands precisely in a corner. Our final rule change is that we can make our billiard table whatever shape we want; later on we will discuss what shapes behave ‘nicely’ as billiard tables.

There is also an important law of physics that our ball must obey; in fact, this law is also present in real billiards. We require that, when the billiard ball strikes an edge of our table,

angle of incidence = angle of reflection

That is, the angle the ball’s path (prior to collision) and the edge of the table make is equal to the angle the ball’s path (after the collision) and the edge of the table make.
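The reflection law is easy to implement with vectors: decompose the velocity into a component along the wall and a component across it, and flip the latter. A minimal 2D sketch (the function name and setup are mine, not from any billiards library):

```python
import math

def reflect(v, edge):
    """Bounce velocity v off a wall with direction vector `edge`:
    keep the component of v along the wall and flip the component
    along the wall's normal, so that the angle of incidence equals
    the angle of reflection (and speed is preserved)."""
    ex, ey = edge
    norm = math.hypot(ex, ey)
    ex, ey = ex / norm, ey / norm    # unit tangent along the wall
    nx, ny = -ey, ex                 # unit normal to the wall
    dot = v[0] * nx + v[1] * ny
    return (v[0] - 2 * dot * nx, v[1] - 2 * dot * ny)

# a ball hitting a horizontal wall at 45 degrees leaves at 45 degrees:
w = reflect((1.0, -1.0), (1.0, 0.0))
```

Note that the collision is elastic by construction: the returned vector has the same length as the incoming one.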

One of the interesting open questions in the study of mathematical billiards is, given a particular shape as our billiard table, whether it is possible to find a billiard path that is periodic; that is, one which repeats itself over and over again. A billiard path is simply the path that a billiard ball takes once it is set in motion. Here are some simple examples of periodic billiard paths. Notice that the table does not necessarily have to have straight edges.

In the case of the circle, notice that the path is periodic because it bounces perpendicularly off the edges; this is a common way to find periodic billiard paths, as we will see. We will, however, consider only polygonal billiard tables from here on. It is unknown whether every polygonal table has a periodic billiard path, and the solution to this question is an active area of research. R. Schwartz has recently proved that every triangular table with largest angle no greater than 100 degrees has a periodic billiard path. So, what other polygons are known to have periodic billiard paths? We will look at a particular class of polygons called rational polygons and prove the following theorem:

Theorem: Every rational polygon has a periodic billiard path.

A rational polygon is one whose angles are all rational multiples of $\pi$. It’s easy to get lost in the subtleties of a full proof of this theorem, so we will not discuss every detail. If you are interested in seeing the details and a more rigorous discussion, most of what follows can be found in chapter 17 of Richard Schwartz’s book, Mostly Surfaces.

The first detail we need to discuss is what we mean by a translation surface. We will demonstrate this concept with a simple example. Consider an isosceles triangle $T$ with angles $\frac{\pi}{4}, \frac{\pi}{4},$ and $\frac{\pi}{2}$. Let’s glue the congruent edges together. We call the ‘glued up’ space $\bar{T}$. $\bar{T}$ is an example of what we call a cone surface. Now, pick a point $x \in T$ that is a vertex of $T$. Then we call $\bar{x}$ (the point of $\bar{T}$ that $x$ became after gluing) a cone point. In our example, we have two cone points since two vertices of our triangle were glued together. For each cone point, we define the cone angle to be

cone angle of $\bar{x}$ = $\sum\limits_{x \in \bar{x}}$ angle at $x$.

The summation is over all points in $T$ which are glued together to form $\bar{x}$. So, in our example, the cone angle of each cone point is $\frac{\pi}{2}$ because we glued the two vertices that have angle $\frac{\pi}{4}$ together, and the vertex with angle $\frac{\pi}{2}$ is not glued to anything but itself. Now, we can finally give a definition of a translation surface: A translation surface is a cone surface such that each cone angle is an integer multiple of $2\pi$. We will not be able to prove this here, but what we need to know in order to proceed is that for any rational polygon $P$, we can take finitely many copies of $P$ and come up with gluings of pairs of edges so we obtain a translation surface. In general, we cannot do this for polygons which are not rational; as we will see shortly, this is the reason why we can only prove that rational polygons have a periodic billiard path.

Here’s an easy example of how we can construct a translation surface from a rational polygon. Let’s begin with one of the simplest examples of a rational polygon: a square. Call our square $S$. Let $e_i$, $i = 1,2,3,4$ be the edges of $S$ as shown to the left.

For each $e_i$, let $R_i$ be the reflection over that edge. If $e_i$ and $e_j$ are parallel, then $R_i = R_j$ (this isn’t exactly true, but hold on!). So, in our example, $R_1 = R_3$ and $R_2 = R_4$. Let $G$ be the group consisting of all possible compositions of $R_1$ and $R_2$. We can see that $G = \left\{ {I, R_1, R_2, R_1 \circ R_2} \right\}$ since any other composition gives us something already in $G$. Note that the order of $G$ is 4; in particular, $G$ is a finite group! This is very important: going through this process with any rational polygon will yield a finite group. If we generate a group this way for a polygon which is not rational, then the group will not necessarily be finite, and if that happens we cannot continue. We make ‘copies’ of $S$ by applying each of our group elements to $S$. We also shift our copies of $S$ so that they are all disjoint from each other. The fact that we shift is what allows us to say two reflections are equal if the corresponding edges are parallel. We now have four copies of our original square.
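We can watch this group close up computationally. The sketch below (my own naming) tracks only the linear parts of the reflections as 2×2 matrices, ignoring the shifts, just as the identification of parallel-edge reflections suggests; composing until nothing new appears yields a group of order 4:

```python
def mat_mul(A, B):
    """Multiply two 2x2 matrices stored as tuples of tuples."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2))
                 for i in range(2))

I  = ((1, 0), (0, 1))
R1 = ((-1, 0), (0, 1))   # linear part of reflection over a vertical edge
R2 = ((1, 0), (0, -1))   # linear part of reflection over a horizontal edge

# generate the group: keep composing until the set stops growing
G = {I, R1, R2}
changed = True
while changed:
    changed = False
    for A in list(G):
        for B in list(G):
            C = mat_mul(A, B)
            if C not in G:
                G.add(C)
                changed = True
```

The only new element produced is $R_1 \circ R_2$ (rotation by $\pi$), so the closure has exactly four elements, matching $G = \{I, R_1, R_2, R_1 \circ R_2\}$.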

We begin with the obvious gluings: $e_1$ to $R_1(e_1)$, $e_2$ to $R_2(e_2)$, $R_2(e_1)$ to $R_1 \circ R_2(e_1)$, and $R_1(e_2)$ to $R_1 \circ R_2(e_2)$.

Now we have a bigger square whose edges we need to glue together. To do this, we will glue opposite edges together. For example, we glue $e_3$ to $R_1(e_3)$, $R_2(e_3)$ to $R_1 \circ R_2(e_3)$, etc. (For general polygons, there are equations for determining which gluings to make, but we won’t go into that). It is easy to verify that all cone angles are integer multiples of $2\pi$. The resulting surface is a torus, which is a doughnut or ring shape, and this is our translation surface. It’s important to note that we still think of our translation surface as being flat. Although our translation surface does look like the surface below, we prefer to visualize it as a flat polygon with the gluings that we’ve defined.

We need two lemmas before we can tackle our theorem. Before we introduce our first lemma, however, we need a little background information. Define a path $\gamma \in \bar{P}$ (where $\bar{P}$ is a translation surface) to be straight if when we ‘unglue’ our translation surface, the path becomes a straight line. For example, the equator is a straight path on the Earth; although the equator is curved, it looks like a straight line on a map.

We need a way to go between our polygon $P$ and the translation surface $\bar{P}$ that we build from $P$. To accomplish this, we define a function $\pi : \bar{P} \rightarrow P$. $\pi$ takes as an input some point $\bar{p}$ and outputs the point on $P$ corresponding to $\bar{p}$ after we construct our translation surface.

Lemma 1: Let $\hat{\gamma}$ be a straight path on $\bar{P}$ which does not go through any cone points of $\bar{P}$. Then $\gamma = \pi(\hat\gamma)$ is a billiard path on $P$.

We won’t go through a full proof here. However, it is not so difficult to convince yourself of this result: take a piece of paper and fold it in half, creating a crease. Unfold it and draw a straight line which crosses the crease. This straight line is analogous to a straight path which crosses an edge from one copy of $P$ to another. When you refold the paper (folding the paper is analogous to hitting $\bar{P}$ with our $\pi$ function), you’ll see that the straight line now ‘bounces’ off the crease in accordance with the rule angle of incidence = angle of reflection. In the figure below, the dotted line is the image of the straight path after refolding the paper.

Now, for any $\bar{x} \in \bar{P}$, and for any direction on $\bar{P}$, define $f(\bar{x})$ to be the point of $\bar{P}$ obtained by starting at $\bar{x}$ and moving one unit in the chosen direction. $f$ is defined everywhere except when the path between $\bar{x}$ and $f(\bar{x})$ hits a cone point. If we think of our path as traveling at unit speed, we can say that $\pi(f^n(\bar{x}))$ is the location of our billiard ball on $P$ after $n$ seconds. We will now prove our second lemma:

Lemma 2: Pick a point $p \in \bar{P}$. Then there is a point $q \in \bar{P}$ that is very close to $p$ and $f^n(q)$ is very close to $p$ (for some $n$).

Proof: Let $D_0$ be a small disk around $p$, and for each $n \in \mathbb{N}$ let $D_n$ be a disk of the same size centered at $f^n(p)$. Since the area of $\bar{P}$ is finite, the disks $D_0, D_1, D_2, \ldots$ cannot all be pairwise disjoint: infinitely many disjoint disks of equal area would have infinite total area. Hence we can find two disks $D_i$ and $D_j$, with $i < j$, that intersect at some point $x$. Applying $f^{-1}$ shows that $D_{i-1}$ and $D_{j-1}$ intersect at $f^{-1}(x)$, and continuing in this way, after $i$ applications of $f^{-1}$ we find that $D_0$ and $D_{j-i}$ intersect at the point $x_0 = f^{-i}(x)$. Then $x_0$ is very close to $p$ (it lies in $D_0$), and since $f$ moves points by translation, $f^{j-i}(x_0)$ is very close to $f^{j-i}(p)$, which in turn is close to $x_0$ and hence to $p$ (as $x_0$ lies in $D_{j-i}$). So $f^{j-i}(x_0)$ is very close to $p$. This completes the proof. $\square$
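For the flat torus built from the square, $f$ is just translation by the direction vector modulo 1, and the lemma becomes the pigeonhole recurrence argument in miniature. A sketch with made-up starting point and direction: covering the torus by a 20 × 20 grid of cells, two of the first 401 orbit points must share a cell, which forces a return within 0.05 in at most 400 steps.

```python
import math

def first_return(p, v, eps=0.05, n_max=400):
    """Search for an n with f^n(p) within eps of p on the unit torus,
    where f is translation by v (mod 1).  Two of the first n_max + 1
    orbit points must share a cell of a (1/eps) x (1/eps) grid, and
    since f is a translation their index difference gives a return,
    so the search is guaranteed to succeed."""
    def torus_dist(a, b):
        return max(min(abs(s - t), 1 - abs(s - t)) for s, t in zip(a, b))
    x = p
    for n in range(1, n_max + 1):
        x = ((x[0] + v[0]) % 1.0, (x[1] + v[1]) % 1.0)
        if torus_dist(x, p) < eps:
            return n
    return None

n = first_return((0.2, 0.5), (math.cos(1.0), math.sin(1.0)))
```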

We now have all the tools we need to prove the theorem. We let $P$ be any rational polygon, and we build a translation surface $\bar{P}$ from $P$. Pick a point $p \in \bar{P}$ and a direction that is perpendicular to and heading towards a nearby edge of $P$. Let $\hat{\gamma}$ be the straight path which starts at $p$ in the chosen direction. By our first lemma, there is a billiard path $\gamma$ in $P$ which corresponds to $\hat{\gamma}$. By our choice of direction, $\gamma$ is traveling perpendicularly to an edge of $P$ at time 0. By our second lemma, there is a $q$ very close to $p$ such that $f^n(q)$ is also very close to $p$ (for some $n$). Let $\beta$ be the path starting at $q$ in our chosen direction. We take “very close” to mean that $q$ and $p$ are on the same copy of $P$ used in the construction of $\bar{P}$. Recall that $\beta$ is a straight path, and so $\beta$ is traveling in the same direction at times 0 (when it is at $\pi(q)$) and $n$ (when it is at $\pi(f^n(q))$). But at time 0, $\beta$ is traveling perpendicularly to an edge of $P$, since it is traveling in the same direction as $\gamma$. Hence, $\beta$ is traveling perpendicularly to an edge of $P$ at times 0 and $n$. Therefore, $\beta$ hits the edge perpendicularly twice, and so is periodic, since it just retraces its path each time it hits the edge perpendicularly. Such a path might look like this:

There are two natural questions to ask now. The first is “What about polygons that are not rational? Do these polygons not have periodic billiard paths?” The answer here is ‘no’, there are some non-rational polygons which do have periodic billiard paths. To convince yourself of this, notice that any parallelogram has a periodic path. In fact, there are no known examples of non-rational polygons which do not have a periodic billiard path. The second, and more difficult, question to ask is “Does this result hold in three dimensions as well? Is it true that rational polyhedra have periodic billiard paths?” The intuitive answer might be ‘yes’ because none of our rules change when we move up in dimension. It has been proven that the answer is in fact ‘yes’ for tetrahedra, but the result is unknown for more complicated polyhedra, like this one.

Sources

Mostly Surfaces by Richard Evan Schwartz

Obtuse Triangular Billiards II: 100 Degrees Worth of Periodic Trajectories by Richard Evan Schwartz

Images were either hand drawn or found using a Google Image search


• Completeness on an Incomplete Sphere.

I used to think I had a fairly good understanding of distance. If asked to find the distance between two points, I mentally connect them with a line segment and then find its length using the Pythagorean Theorem. Though I knew how to create fancy mathematical spaces, when it came to the real world I had my real sense of distance: straight lines connecting points.

Figure 1: When measuring the distance between cities, it doesn’t make sense to find the length along the straight line through the center of the Earth. Instead, we look at the length of the curve along the Earth’s surface.

Despite this intuition, I jettison this definition of distance when it doesn’t make sense. When I pulled out my ruler on Google Maps to find the distance between my hometown of Jay, Maine and Sydney, Australia, I found that there were over ten thousand miles between them. Quite impressive, since the diameter of the Earth (the greatest straight-line distance between two points) is only about eight thousand miles. Clearly then, this “distance” according to Google is not just the linear displacement. In fact, as we see in Figure 1, this distance follows a path along the exterior of the sphere. Though different from our original idea of distance, this length is more useful if we were to plan a trip.
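Google’s figure is the great-circle distance, which we can reproduce with the spherical law of cosines. The coordinates below are rough approximations I looked up (an assumption, as is modeling the Earth as a perfect sphere of radius 3958.8 miles):

```python
import math

def great_circle_miles(lat1, lon1, lat2, lon2, radius=3958.8):
    """Great-circle distance via the spherical law of cosines,
    treating the Earth as a sphere of the given radius in miles."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    c = math.acos(math.sin(p1) * math.sin(p2) +
                  math.cos(p1) * math.cos(p2) * math.cos(dlon))
    return radius * c  # central angle times radius

# approximate coordinates for Jay, Maine and Sydney, Australia
d = great_circle_miles(44.5, -70.2, -33.87, 151.21)
```

The result comes out a bit over ten thousand miles, comfortably more than the Earth’s diameter, just as the post describes.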

Suddenly, these fancy mathematical spaces with their unintuitive notions of distance didn’t seem quite so pointless. Depending on the purpose of the measurements, there is very good reason to draw something besides a straight line, or even to let our sense of length stretch depending on our position. When working on my thesis, I encountered an interesting theorem by Paul Montel, which states that a family of analytic functions is normal when their ranges miss at least three points. In its proof, I was required to work with an entirely new sense of distance on the thrice-punctured sphere.

In different contexts, we may define many different ways to measure distance (metrics) so that our measurements will be useful. One property we seek when defining a new metric is the idea of completeness. If we take a sequence of points, $( x_1, x_2,$…), completeness says that if the sum of the distances between each $x_n$ and the next one is finite, then the points must be getting arbitrarily close to some limit in our space. When traveling across the surface of the Earth, if we walk some finite distance and then stop we are always guaranteed to still be on the surface of the Earth. Completeness serves as an important property in real analysis. In fact, it is because we want completeness that we construct the real numbers from the rational numbers.

Figure 2: The sphere is projected onto the plane by drawing a line from N through a point P, sending P to where the line crosses the plane (P’).

If we cut out some small, closed region from our sphere, or even a single point, then our new metric space (under the same definition of distance) ceases to be complete. Consider if a country, on Earth, closed itself to foreigners. I would never be able to travel to this country, despite the fact that the distance between me and its border would be finite. If we want to consider the Earth, without this country, to be a complete system, we must redefine distance so that the distance from anywhere in our space to the border of this country is infinite. To illustrate how we might define such a distance, let us look at a sphere which has a single point $N$ removed from it. We may transform any point of the once-punctured sphere onto the Euclidean plane ($\mathbb R^2$) via the continuous bijection known as the stereographic projection (Figure 2). We then say that the distance between two points on the sphere, p and q, is the Euclidean distance between p’ and q’. This metric, it turns out, is complete on the once-punctured sphere. Through a similar process, we may induce a metric on a sphere with two distinct punctures using the Euclidean metric (we won’t go into detail about that process here).
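A sketch of the projection and the induced metric (the function names are mine). The key feature is visible numerically: points creeping toward the deleted pole $N$ are sent far out in the plane, so the induced distance to them blows up.

```python
import math

def stereographic(p):
    """Send a point p = (x, y, z) of the unit sphere (with z != 1) to
    the plane z = 0 by projecting from the north pole N = (0, 0, 1)."""
    x, y, z = p
    return (x / (1 - z), y / (1 - z))

def induced_dist(p, q):
    """The metric pulled back from the plane: the Euclidean distance
    between the stereographic images of p and q.  Points near the
    deleted pole N land far out in the plane, which is what makes this
    metric complete on the once-punctured sphere."""
    px, py = stereographic(p)
    qx, qy = stereographic(q)
    return math.hypot(px - qx, py - qy)

south = (0.0, 0.0, -1.0)                       # projects to the origin
near_N = (math.sqrt(1 - 0.9999**2), 0.0, 0.9999)  # very close to the pole
```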

Figure 3: Starting on the disk, we see that we can connect sides to make a pair of pants.

Figure 4: The pair of pants can be deformed into a sphere with three holes.

So then, if we have a simple example of a metric on the sphere, and a way to induce a metric on the spheres with one or two punctures using Euclidean space, is it possible to find a complete metric on the thrice-punctured sphere? The answer is yes… but not by relating our space back to Euclidean space.

Consider the shape $S$ formed by four arcs in a unit disk and the pieces of its boundary connecting them (Figure 3). Getting rid of the unnecessary portion of the disk, we may fold $S$ over the $x$-axis and connect the top two arcs to the bottom two. We then take any two points along these arcs sharing an $x$-coordinate and define them to be equivalent (“gluing” them together, so to speak). This gluing forms what we call a pair of pants. Note that this shape has three holes in it, suggesting a similarity to the thrice-punctured sphere. In fact, we can stretch it so that it exactly resembles such a sphere (Figure 4). By making the arcs intersect the edge of the disk closer to one another, we may shrink these punctures down to single points.

Recall how we defined a metric on the punctured sphere by sending points to the Euclidean plane. We now define distance on the thrice-punctured sphere using the mapping from the sphere back into the disk. Note that, for any path on the sphere, we may unwrap the sphere back into $S$, giving us some collection of paths in the disk. We then define the length of a path by however long its segments are in the unwrapped space. The distance between two points, then, is the infimum of the lengths of all such paths: the largest value which is no longer than the length of every such path.

Figure 5: Showing how a path on the thrice-punctured sphere can be “unwrapped” back onto our original shape S. The length of the path then, from A to B, is the length of each of the segments of its inverse image in S.

This just means that we need to clarify our notion of distance on $S$, inside of the open unit disk. Now, we could use the Euclidean metric again, but this would still fail to give us a complete space. If I take the sequence $0, \frac 12, \frac 23, \frac 34,\frac 45,\ldots$, it should not be hard to see that the sum of the distances between consecutive points is just one. Indeed, the sum of the first $n$ of these distances telescopes to the difference between zero and the $(n+1)$th term, which tends to one. However, the fact that the sequence converges to one also causes a problem, as one corresponds to one of the punctures on our sphere. So then, we cannot have a complete sense of distance on the open disk using the Euclidean sense of distance.

Figure 6: Taking three line segments A, B, and C of equal Euclidean length, it turns out that their hyperbolic length increases the closer they are to the boundary. The length of B is greater than that of A, while we say that the length of C is infinite, as it reaches the boundary.

This is where hyperbolic space comes in. On the unit disk model, we think of hyperbolic space as becoming squished near the boundary. Though the Euclidean distance between two points in the disk may only be some value d, if we slide them towards the edge of the disk (without changing their Euclidean displacement) then the hyperbolic distance between them approaches infinity. Though a severe over-simplification, think about taking a 6″ pen and holding it horizontally in front of your face. Though we might say that the pen has a constant Euclidean length, regardless of where you place it, if you slowly move the pen towards one of your eyes (while holding the other shut) the width of the pen appears to grow arbitrarily large, until it is wider than your field of vision.
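The pen analogy can be made quantitative. The standard distance formula for the Poincaré disk model is $d(z,w) = 2\,\operatorname{artanh}\left|\frac{z-w}{1-\bar z w}\right|$; the post doesn't state it explicitly, so the sketch below is my own illustration of the sliding-segment claim:

```python
import math

def hyperbolic_dist(z, w):
    """Poincare-disk distance between two points of the open unit disk."""
    return 2 * math.atanh(abs((z - w) / (1 - z.conjugate() * w)))

# A segment of fixed Euclidean length 0.5, slid toward the boundary.
# Its hyperbolic length grows without bound as it approaches |z| = 1.
for shift in (0.0, 0.2, 0.4, 0.49):
    z, w = complex(shift), complex(shift + 0.5)
    print(shift, hyperbolic_dist(z, w))
```

The Euclidean displacement never changes, but each slide toward the boundary strictly increases the hyperbolic distance, matching Figure 6.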

It is this property that makes our space complete. Look back at our previous example, the sequence converging to one. In Euclidean space, the distances between consecutive terms added up to one. Now, however, these distances are amplified as the terms approach the boundary. In fact, since the line segment from zero to one has infinite length in hyperbolic space, the sum of these distances (which are just the pieces of this line segment) must grow arbitrarily large as we add them. We see then that this fixes the problem we had when we tried to use the Euclidean metric.
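We can also verify numerically that the hyperbolic gaps between consecutive terms of $a_n = \frac{n}{n+1}$ no longer sum to a finite value. A sketch, using the standard Poincaré-disk distance formula (the helper names are hypothetical):

```python
import math

def hyperbolic_dist(a, b):
    """Poincare-disk distance between two real points of the open unit disk."""
    return 2 * math.atanh(abs((a - b) / (1 - a * b)))

def hyperbolic_partial_sum(N):
    """Sum of hyperbolic distances between the first N consecutive terms."""
    return sum(hyperbolic_dist(n / (n + 1), (n + 1) / (n + 2)) for n in range(N))

# The terms all lie on a single geodesic (a diameter), so the pieces add
# exactly: the partial sum equals the distance from 0 to a_N, which is
# 2*atanh(N/(N+1)) = ln(2N+1) -- unbounded as N grows.
print(hyperbolic_partial_sum(100))
print(hyperbolic_partial_sum(10000))
```

The partial sums diverge, so the sequence is no longer Cauchy in the hyperbolic metric, and the puncture at one no longer witnesses a failure of completeness.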

By repairing this issue, our new space turns out, in fact, to be complete. The existence of a complete metric induced by the hyperbolic disk follows from applying the Poincaré Polygon Theorem to $S$. Some may wonder, then, why I am making such a big deal about completeness. It turns out that many tools of advanced mathematics only work to their full potential on complete spaces. My entire preoccupation with the thrice-punctured sphere began while doing research for my honors thesis in Complex Analysis. Despite it being a different field, I needed this definition of distance. The Schwarz–Ahlfors–Pick Theorem directly relates this metric to the idea of differentiability, telling us that differentiable (holomorphic) functions from the open disk to this punctured sphere don't increase hyperbolic distance.
