
Cycle-maximal triangle-free graphs

Thu 30 Oct 2014 by mskala

In January of 2011, I had recently arrived at the University of Manitoba to work as a postdoc with Stephane Durocher. One of the first things he asked me to do was find out how many cycles there are in an n-dimensional hypercube graph, for general n. At the time, he and I both assumed that that meant spending maybe half an hour in the library looking up the answer.

Since then it's been more than three years; thousands of lines of computer code and months of CPU time; we added two more co-authors; we didn't solve the original problem, and didn't completely solve the other problem that we ended up working on, either; but here's a journal paper, anyway, and I think it's pretty interesting. The official definitive version will be free to access until mid-December and then will go behind Elsevier's paywall; the authors' accepted manuscript on arXiv is permanently accessible.

Read the paper if you're interested in the math details; in this entry I'm going to try to tell the story behind the paper, in relatively non-mathematical terms. I'm hoping it'll be of interest to my Web log readers and give you some idea of what I actually do at work, because I often get asked that.

Graph theory is of some interest for its own sake, but Steph had actually come to this question from a more or less practical application, in local routing. When you have computers in a network, they have connections among them in some kind of pattern, forming a mathematical object called a graph, and they need to exchange messages over those connections. If a computer has a message for another computer in the network, but no direct connection to the recipient, then it has to pass the message to one of its neighbours and hope for it to be forwarded.

All the forwarding computers along the path the message takes need to be able to make the routing decision of which neighbour to pass the message to next. But this might be a "sensor network" (which is a buzzword in our business right now), or an ad hoc network of devices like cell phones talking to each other without a central tower. The individual computers in the network might not be very smart, and they might not have complete knowledge of the whole pattern of connections in the network. In particular, individual computers might have only local information about the network - maybe I know about my friends and my friends' friends, but not anyone more distantly connected than that. How to make routing decisions with only local information is an interesting research question, at least for the people I was working with at U of M.

That gets back to cycles in graphs. If there is exactly one way to get from anywhere in a graph to anywhere else, routing becomes in some sense easy; you never have to make any hard decisions. (This is a simplification of the more precise mathematical definitions, but:) Graphs that behave that way are called trees. On the other hand, a connected graph that is not a tree will contain one or more cycles: paths along which a message could travel such that it returns to its starting point without directly backtracking any of the individual links. The presence of cycles makes routing harder, even though it may be desirable for other reasons (for instance, increased reliability).

We can try to make routing easier by requiring the graph to have a specific minimum girth: a lower limit on the length of a cycle. For instance, maybe we require that the graph contains no cycles of three or four vertices. Then the girth is at least five. On the other hand, it might still contain many cycles that are just a little longer than the minimum, and those make routing harder. So if we're interested in what kinds of graphs could cause trouble for our routing scheme, then we'd like to know about graphs that are special with regard to these properties: large numbers of cycles despite having a specific girth. Those are graphs we'll really need to pay attention to when instructing the computers on how to route messages.
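As an aside for the programmers: girth is cheap to compute, even though (as we'll see) counting cycles is not. Here's a small Python sketch - mine, not anything from the paper - using the classic textbook method: breadth-first search from every vertex, and whenever a non-tree edge closes a loop, keep the shortest loop found.

    from collections import deque

    def girth(adj):
        # adj: dict mapping each vertex to a list of its neighbours.
        # Returns the length of the shortest cycle, or infinity for
        # a forest.  Classic O(V*E) approach.
        best = float('inf')
        for root in adj:
            dist = {root: 0}
            parent = {root: None}
            queue = deque([root])
            while queue:
                v = queue.popleft()
                for w in adj[v]:
                    if w not in dist:
                        dist[w] = dist[v] + 1
                        parent[w] = v
                        queue.append(w)
                    elif parent[v] != w and parent[w] != v:
                        # non-tree edge: it closes a cycle
                        best = min(best, dist[v] + dist[w] + 1)
        return best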

Hypercube graphs are easy to describe recursively, and they have some special properties with regard to routing, so they were the starting point. We had hoped for there to be some kind of simple recurrence relation for counting cycles in a hypercube - some way to relate cycles in n+1 dimensions to some combination of cycles in just n dimensions. If there were a relation like that, it would become relatively easy to count cycles over any number of dimensions.
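The recursive structure itself is easy to write down, even if the cycle counts aren't. As a sketch (mine, for illustration): the vertices of the n-cube are the n-bit strings, adjacent when they differ in exactly one bit, and the two halves - the strings starting with 0 and the strings starting with 1 - are each an (n-1)-cube.

    def hypercube(n):
        # Vertices are the integers 0 .. 2^n - 1, read as n-bit
        # strings; two vertices are adjacent when their XOR is a
        # power of two, i.e. they differ in exactly one bit.
        return {v: [v ^ (1 << i) for i in range(n)]
                for v in range(2 ** n)}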

The hoped-for recurrence doesn't work, because a cycle in n+1 dimensions could be split into lots of little pieces spread out between the two n-dimensional halves of the larger hypercube. What I found in the library (and it took somewhat more than half an hour) was that we only know the specific numbers for up to 7 dimensions, and one of the best-known papers on the 7-dimensional case is incorrect, too.

In fact, counting cycles in a general graph is a #P-complete problem. It is as hard as counting the certificates of an NP-complete problem. That in itself is interesting because the related decision problem ("Is there a cycle in this graph?") is dead easy: it can be solved in linear time.
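To make the contrast vivid, here's a toy Python check - my own sketch, nothing from the paper - that decides cycle existence in near-linear time with union-find, even though counting the cycles exactly is #P-complete.

    def has_cycle(adj):
        # Union-find: merge the endpoints of each edge; an edge
        # whose endpoints are already in the same component closes
        # a cycle.
        parent = {v: v for v in adj}
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]  # path halving
                v = parent[v]
            return v
        for v in adj:
            for w in adj[v]:
                if v < w:                # each undirected edge once
                    rv, rw = find(v), find(w)
                    if rv == rw:
                        return True
                    parent[rv] = rw
        return False

But, back to our story.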

It took me several months to write the software for counting cycles, which I called ECCHI (for "Enhanced Cycle Counter and Hamiltonian Integrator"). That wasn't full-time, just a background project while most of my research was on other things. I'm still hoping to eventually turn ECCHI into a releasable software package that other people can use; that may be a long way down the road, though.

I was able to reproduce all but the largest known counting results on hypercubes, but not to push them to the next level. The 8-cube is just too big. So, looking for other ways to use this tool, I started looking at simply searching for graphs with lots of cycles and fixed girth by brute force. I used the well-known nauty and its associated tools to generate all the graphs of a given girth, used ECCHI to count their cycles, and retained the ones with the greatest count. That works up to maybe 10 or 12 nodes in the graph. With some additional wrinkles, it can be pushed a little further.
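ECCHI itself isn't released, so here's a toy Python stand-in for the counting step - my sketch, emphatically not the real thing. It grows each cycle by depth-first search from the cycle's lowest-numbered vertex, finds every cycle twice (once per direction), and halves the total. This is exactly the sort of code that works at 10 or 12 vertices and dies not far beyond.

    def count_cycles(adj):
        # adj: dict mapping integer vertices to neighbour lists.
        # Each simple cycle is discovered from its lowest-numbered
        # vertex, once in each direction; halve the raw count.
        count = 0
        def extend(start, v, on_path, length):
            nonlocal count
            for w in adj[v]:
                if w == start and length >= 3:
                    count += 1    # closed a cycle back to the root
                elif w > start and w not in on_path:
                    on_path.add(w)
                    extend(start, w, on_path, length + 1)
                    on_path.remove(w)
        for s in adj:
            extend(s, s, {s}, 1)
        return count // 2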

Pablo Picasso supposedly said about computers, "But they are useless. They can only give you answers." That was more or less the situation: the brute-force cycle count gave us a list of all the graphs with most cycles for each fixed girth and number of vertices, but no insight into why those particular graphs were the winners.

I spent some time drawing pictures of all the graphs that had come out of the computer search, trying to depict them in the way that would most clearly show whatever structure existed. For girth four (that is, triangle-free graphs... the subject of the paper, in fact) it seemed clear that the answer was always the bipartite Turán graph of the appropriate number of vertices. You take your computers, you divide them into two equal groups (as evenly as possible, if there is an odd number), and each computer on one side talks to all the computers on the other side. But we didn't know why. More on that later.
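In code, under the obvious reading of that description, the construction is just the complete bipartite graph with parts as equal as possible:

    def turan_bipartite(n):
        # The bipartite Turan graph on n vertices:
        # K_{floor(n/2), ceil(n/2)}.
        half = n // 2
        left, right = list(range(half)), list(range(half, n))
        adj = {v: right[:] for v in left}
        adj.update({v: left[:] for v in right})
        return adj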

For girth greater than four, well, I looked at the graphs and they were weird. Some were well-known graphs; the notorious Petersen graph was on the list; some were graphs I'd never seen before. Most were cubic, or close, but some weren't. Bipartite graphs, non-bipartite graphs, with and without almost any property you'd care to think of. There was no obvious pattern.

At this point Steph and I recruited Ben and Dave to help with the problem, and the four of us had an entertaining series of meetings over a period of several months. We gradually restricted the problem. Instead of trying to answer it for all girths right at the start, we decided to look at the girth four case first. As a warm-up, so to speak. Because that looked easy.

One reason for it to be easy was that the Turán graphs are already well-known to maximize the number of edges, among triangle-free graphs of a specific number of vertices. It makes sense that you need a lot of edges to have a lot of cycles and vice versa, so there should be a way to prove (now that we know from the computer search that it seems to be true) that maximizing one of those numbers has the effect of maximizing the other. But we just couldn't find the connection.

The closest thing was what eventually became Lemma 3.5 in the final paper: it says that there's an upper limit on the number of cycles in a graph, subject to certain conditions, depending on the number of edges. And we already know - this part really is easy - exactly how many cycles there are in the Turán graphs. So that indirectly gives a minimum number of edges that any graph must have in order to beat the Turán graph. We can rule out entire classes of graphs that way, including some of the ones that are most troublesome. But the "certain conditions" are a problem, and we didn't even have that Lemma at the start of the project.

Another thing we tried was to look for an inductive proof. It feels like you should be able to say, "If you had a graph which did not have the most cycles, then you could make some small change to it and cause it to have more cycles. Therefore the graph that has the most cycles must be one to which you cannot make such a change. And the only graph like that, is this one here." That's often the natural way to solve this kind of problem.

It doesn't work, basically because of the graph below. (From Figure A.6 in the paper.)

[C_5(2)]

What's special about that graph is that it's a local maximum. It has 593 cycles, which seems like a lot. And any small change you might reasonably make to it, such as removing a vertex and reinserting it elsewhere, will reduce the number of cycles. So the sort of inductive proof you'd try to construct, if you could make it work, would end up "proving" that this graph is cycle-maximal. But it isn't. The corresponding Turán graph actually contains 3940 cycles. So the usual kind of inductive proof just doesn't work. That kept us - three professors and me, whatever I am - stymied for weeks.
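As a sanity check on that 3940 (my own check, not from the paper), the toy counter from earlier reproduces it on the 10-vertex bipartite Turán graph, which is K_{5,5}. It also agrees with the closed form: K_{5,5} contains C(5,k)^2 * k! * (k-1)! / 2 cycles of length 2k, and 100 + 600 + 1800 + 1440 = 3940 for k = 2 through 5.

    k55 = turan_bipartite(10)      # vertices 0-4 vs. 5-9
    print(count_cycles(k55))       # prints 3940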

We restricted the scope of the problem further, by limiting it to only regular graphs. At this point it's not really about computers in a network anymore, but if you go back to the computers-in-a-network idea, saying the graph is regular amounts to saying all the computers have the same number of neighbours. It creates a sort of symmetry in the graphs. Because we really wanted to be able to prove something sensible about cycle-maximal graphs, it seemed like proving something about a restricted class of them could be worthwhile, and regular graphs seemed like a class about which we might reasonably be able to prove something.

There is a kind of infinite landscape of all the regular graphs that might possibly seem like they could be solutions to our problem. Here's a picture of the interesting part of that landscape. (This is Figure 4 in the final paper.) I love this picture because it lays out, visually, all the connections between many different pieces of math that would otherwise be hard to relate to each other. If we just think about cutting out sections of the diagram, it's relatively easy to know which cases have and haven't been proven.

[Figure 4]

So: the hypothesis is that for all n (the number of vertices), the bipartite Turán graph with n vertices contains more cycles than any other n-vertex regular triangle-free graph. For even n, the Turán graph is itself regular, but we also want to prove that for odd n it beats all the regular graphs. The picture above represents the landscape of regular graphs, with number of vertices along the horizontal axis and number of edges (scaled into a proportion of the maximum) along the vertical axis.

To test this hypothesis we had to prove statements about graphs (or look in the library for things other people had already proved) that had the effect of carving out chunks of that landscape until nothing was left.

Anything above the 2/5 horizontal line at the top is ruled out by a well-known result of Andrásfai - published in 1964, in Hungary (the country), in German (the language), under the splendid title "Graphentheoretische Extremalprobleme." Most people actually cite a later paper by Andrásfai, Erdős, and Sós, because the later one was in English. Anyway, any triangle-free graphs that would fall above that 2/5 line have to be bipartite, and if they are bipartite, then it's easy to prove that they would have to be the bipartite Turán graph in particular in order to be cycle-maximal, so there can't be any counterexamples that would disprove our hypothesis, above the 2/5 line.

The curve labelled "(5) and (10)" in the figure corresponds to Lemma 3.5. Graphs below that curve do not contain enough edges to beat the corresponding Turán graph. And the really important thing about it is that as you push n to larger and larger values, extending the diagram to the right, the curve flattens out and approaches a straight horizontal line, but the line it's approaching is at a y-coordinate of about 0.41. Just a tiny bit more than 2/5, which is 0.40. So we have: everything above the line at 0.40 is ruled out; everything below the curve is ruled out; and to the right of some large n (I think it's about a thousand; the exact number doesn't matter, as we'll see later), the curve crosses above the line and remains there. Consequence: we have cut off all of this infinite landscape except a finite region. Now there is only a finite, though unimaginably large, number of remaining regular graphs that could possibly be counterexamples to our hypothesis.

If we have to look at regular graphs of 1000 vertices, we're dead. There isn't enough computer power in the universe. Even n=20 would be way too much to see them all, and each additional vertex drastically increases the amount of work; the number of graphs to check grows roughly exponentially in the square of n. So although reducing the task to "finite" size is interesting mathematically, that doesn't mean we can solve it yet.

The curve labelled "(5) and (12)" corresponds to Lemma 3.7. It's another bound on number of cycles in relation to the number of edges, but it's based on statements we can make about "homomorphisms" of the graphs. It'd be not wholly inaccurate to say that we can show that inside each of these big graphs, there is a little graph struggling to get out, and we can estimate how many cycles are in the big graph by making statements about the little graph. Anyway this gives us a curve that rules out everything above it - nicely cutting off the hard part of the finite region remaining after the previous step.

Other things we know about homomorphisms, mostly descending from an important theorem proved by Jin in 1993, allow us to prove that there are actually only a few graphs that could possibly be counterexamples in the range between the 10/29 and 2/5 lines on the figure. (Including those sitting exactly on the 2/5 line, but not those on the 10/29 line.) Any such graphs, if they haven't been ruled out already, have to be the ones indicated by the little circles. And the number of those circles is not only finite, but pretty small. It is reasonable to hope to try them all.

Some of those graphs are still large enough to give ECCHI trouble, however. They have up to about 70 vertices, and they are even harder to count, for their size, than similar-sized hypercubes. So we had to come up with a way to rule out a few more. The zigzag labelled "(4) and (9)" represents tighter (but not closed-form) versions of the Lemma 3.5 results, and it cuts off most of the little circles, again including all the hardest ones. So all that remains is the little circles above the zigzag, and a little bit of uncharted territory at the lower left, on and below the 10/29 line (because remember, the little-circles proof only applies above that line).

Almost all the remaining little circles can be ruled out by computer calculation with an algorithm based on the matrix permanent, described in Section 6 of the paper. That was sort of a backtrack - the plan had been to use ECCHI, but we needed the matrix-permanent thing anyway for the next section of the paper, written later, and it ended up being more natural to use it here too. There are a lot more graphs (428 of them) in the "uncharted territory" below the 10/29 line, but they are small enough to be easy to deal with, and the matrix-permanent thing eliminates those, too. The only graph remaining was the one represented by the little circle at the very upper left, namely C5, which is a graph consisting of just a single cycle of five vertices. That has fewer cycles (one) than the corresponding Turán graph (which has three), so it doesn't violate the hypothesis.
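The paper's actual algorithm is in Section 6; purely for flavour, here's the standard tool underneath that kind of computation, Ryser's inclusion-exclusion formula for the permanent, as a Python sketch (mine, not the paper's code). It takes time exponential in n, which sounds terrible but is vastly better than the factorial time of expanding the definition directly.

    from itertools import combinations

    def permanent(a):
        # Ryser's formula: perm(A) is a signed sum, over nonempty
        # subsets S of the columns, of the product over rows of
        # the row sums restricted to S.
        n = len(a)
        total = 0
        for size in range(1, n + 1):
            sign = (-1) ** (n - size)
            for cols in combinations(range(n), size):
                prod = 1
                for row in a:
                    prod *= sum(row[j] for j in cols)
                total += sign * prod
        return total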

And that's it, on regular triangle-free graphs. There is another whole section to the paper, dealing with "near-regular" graphs. In those, there's a little more flexibility on number of neighbours: some vertices have one more neighbour than others. The general way the proof works for that is more or less the same as for regular graphs, but it's messier and requires more computer time. I'm not going to go through it here. Although in some sense it's bigger because it covers a larger class of graphs, it feels less exciting because there's less new material in it.

With academic publishing, actually doing the research is only the beginning of the adventure. Then you have to, well, publish it. After some discussion, we decided to take this one directly to the journal level instead of trying to present it at a conference first. We sent it in to Discrete Mathematics in October of 2013, and after several months it came back with basically favorable peer reviews.

We'd been somewhat afraid that the reviewers would find some much easier way of doing it, and make us look silly. For instance, a simple induction, or maybe a clever application of the Szemerédi Regularity Lemma that would bring the whole works down to a page or so of startling Hungarian probability bounds. That didn't happen, though. One of the reviewers found what turned out to actually be a fairly serious mistake in what became Lemma 3.5 (you can see it if you dig back through the earlier version on arXiv and compare it to the new one), so there was a bit of a flurry of activity after the reviews came back, trying to figure out exactly what was wrong and whether it could be fixed. Fortunately, it could be, and the final result was unchanged. After the new proof was written up, I re-ran all the computational experiments with the fixed version just to make sure of those, even though most of them weren't affected by the change in the underlying theory anyway.

During the review/rewrite process I gave a short talk about this work at the Canadian Mathematical Society conference, and had the honour of being heckled by Béla Bollobás, who literally wrote the book on extremal graph theory. He thought we ought to have used the Szemerédi Regularity Lemma on it. And what can I say, we probably should have.

But! We had four reasonably talented people working on it, we did try the SRL and couldn't get any results that way, and we did manage to get some results by other techniques even if they are messy. Now, okay: the paper is officially out there, and eventually someone - maybe even me if all these Danish pork products make me smarter - will probably manage to apply the SRL and close up the gaps, but that's perfectly okay. Now we get credit for the part that is ours, and having other people work on it afterwards increases the stature of our work.

The same can be said about the question of graphs that are neither regular nor near-regular. There are a whole lot of graphs fitting that description, and to really prove the original conjecture, it'd be necessary to deal with all of them. I don't know how to do that. Maybe the SRL can cover them all in one grand slam of a proof; some of the tools built up in this paper might help, but probably can't cover the whole thing for reasons discussed in the conclusion section; and my gut feeling is that some new technique will have to be invented.

It was almost exactly one year between the initial submission and when I (as "Principal Corresponding Author") got the final proofs from the publisher's outsourcing agency in India. And they came back full of what purported to be "corrections" of the English. We had used the phrase "need only" several times, and the editors wanted to change it to "need to only" throughout. There was a sentence (see if you can find it!) that was of the form "Reducing this will increase that, as will reducing this other thing." and they wanted to change "as will reducing" to "as will be reducing." After discussion with my co-authors, I vetoed both of those changes. I suspect that such loony "corrections" didn't even originate with a human editor, however bad, at all - they look to me like what would come out of some kind of Markov-chain-based computer grammar checker. (Failing to parse the parallel construction seems like a smoking gun: machines do that, and people usually don't.) But that's another story.

One editorial change I didn't veto was the change from American spelling of "color" to Canadian/international "colour," which I prefer anyway but I thought was against their style guide.

A few mistakes did make it through. They always do. The formatting in equation display (17) is screwed up - it was supposed to be three columns of two items each, for easy comparison. While writing this up I noticed some oddities, probably errors, in the equation number references related to Figure 4. And there are a couple of weird issues with the bibliography. These are the kinds of things that probably nobody would notice except me. All in all, I think the really-final no-more-editing-allowed version is in pretty good shape. The math seems to be correct, and some people think that's sort of important.
