Overflow

I asked a question on MathOverflow yesterday that didn’t get a very good response. The gist of the replies: it’s a dumb question; please ask a better one, or ask it somewhere else. It was a good point, and I hope I can do better. I’m going to try here, and maybe post it there if it seems worth the time of the professional mathematicians when I get done.

Without further ado:

From Berge’s Hypergraphs I paraphrase this definition:

If X is a finite set, a hypergraph on that set is a family H of subsets of X such that every element of H is nonempty, and the union of the subsets in H is X.

The elements of X are called vertices, and the elements of H are called edges. There is sometimes a further constraint that if one of the sets in H is a subset of one of the other sets in H, then they are the same set. That makes the hypergraph simple, or a Sperner family. But let’s leave that out for a bit, since it may interact with my question (I’m not sure).
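To make the definition concrete, here is a minimal sketch in Python (my own illustration; the names and the set-of-sets representation are assumptions, not Berge’s notation):

    # A hypergraph per Berge: H is a family of nonempty subsets of X
    # whose union is all of X. Here X is a set and H a list of sets.

    def is_hypergraph(X, H):
        # every edge must be a nonempty subset of X
        if not all(edge and edge <= X for edge in H):
            return False
        # the union of the edges must cover X
        return set().union(*H) == X if H else X == set()

    def is_sperner(H):
        # simple (Sperner): no edge is a proper subset of another edge
        return not any(a < b for a in H for b in H)

    X = {1, 2, 3, 4}
    H = [{1, 2}, {2, 3}, {4}]
    print(is_hypergraph(X, H))  # True
    print(is_sperner(H))        # True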

I think the question I was really trying to ask is: “What is it called if the elements of X are hypergraphs?” Are the elements of H then all hypergraphs in their own right?

Another way of asking the same question (I think) is, “What is it called if the elements of H are always subsets of H?”

Maybe a third way of asking is, “What are the hypergraphs that are *not* Sperner families called?” I’m not quite sure about that, though, which makes it hard to know if the answer to the question is right.

Does the incidence structure with the elements of H on both the rows and columns have substantially different properties from the one that has edges on one and vertices on the other?

Why do I ask?

Mostly because I’m curious, partly because I’m lost, and ultimately because I’m trying to write some code that keeps trying to take this particular shape, and I’d like to read the stuff that people have already written about it.

Reading back through, I suppose it’s not ready for MathOverflow yet, but maybe after another round or two it will be.

Chicken Master

“My master,” he began, but she interrupted him.

“I am not your master.”

“Yes, master.”

“Now go and put the chickens into their coop for the night.”

“But master, the chickens are wild and stupid.”

“Then you will learn a valuable lesson about mastery of yourself.”

He left, contemplating all she had said. She was not his master. Rather, she was mastered. In their conversation, the day before, she had said, “I must,” and he had stopped her to say, “There is no I.”

“Of course there is,” she said. “If I am part of everything, and everything exists, how could it be that I, then, do not exist?”

Upon returning from his hours of chasing chickens through grass and chicken shit, he showered and went to the master, who was reading.

“Do you want to know,” he asked her, “how I got the chickens into the chicken coop?”

“The chickens which were on the outside, you moved to the inside, and the chickens which were already on the inside, you let remain there.”

“Yes, master.”

“But I see there is a chicken still outside the coop.”

“Yes, master, that chicken is wild and stupid.”

“I see you have learned a valuable lesson about yourself.”

“No, master.”

“But you have. Now go, and do not try to outsmart the chicken, because that wild, stupid bird is beyond your ken.”

Hours later, he showered again and returned to the master.

“Now tell me how to outsmart a wild, stupid bird.”

“I mastered myself.”

“Bullshit, answer the question properly.”

“I stilled my body, and the chicken flew up to roost on my shoulder.”

“And now you are as wise as the branch on a tree, which caught all the ancestors of that wild, stupid bird every night for untold generations.”

That Thing Mathematicians Do

The thing mathematicians do that I like is that they often begin with an exercise that lacks any semantic content. For instance, if I define a family, I define them as A, B, and C instead of Jimmy, Jane, and Babs. “Lacks any semantic content” is an overstatement; let’s just say it has less, and that it’s mathematical in nature, mathematicians loving their capital letters the way they do.

A mathematical sentence of this variety might look something like this:

A B C D E F.

I say sentence, but it’s not really a sentence yet; it’s just a sequence. The graph of this sequence might look something like this:

A -> B -> C -> D -> E -> F

In order to rewrite the sentence into a syntactic graph, I need a bit more syntactic information about what sort of thing A, B, C, D, E, and F are. So I say that A is a thing that always connects to a B and a C. B is a thing that always connects to an A and only an A, but C is a thing that connects to an A as well as a D, an E, and an F. D, E, and F each always connect only to a C.

Now I can make a graph that has a bit more structure:

[figure ABCDEF: the sequence organized into a graph by the connection rules]

One way of describing what we’ve done here is to say we have rewritten the graph, but another way is just to say we’ve taken some syntactic (semantic?) information and organized what was formerly a sequence into a graph.

Now imagine we give A, D, and B more definitions of the prior sort: A connects only to a C, D connects to a B, and B connects to a D. We can then draw a different picture of the graph:

[figure ABCDEF2: the same hypergraph redrawn with the added connections]

Think of it not as two graphs, but as two pictures of the same (hyper)graph. We’ve composed it inside out, but that’s one of the tricks I always have to use when I’m trying to think n-dimensionally. Obviously it’s a shortcut, but it gets me there while I’m retraining my brain.

Of note, I will add that I just oriented the graph arbitrarily; for anyone not used to graphs of this sort, A doesn’t need to be at the “top.” The rules, presented here together

  • A connects to B and C
  • A connects to C
  • B connects to A
  • B connects to D
  • C connects to A and D and E and F
  • D connects to C
  • D connects to C and B
  • E connects to C
  • F connects to C

are the lexicon that allows us to compose this sequence (and others using those terms) into a hypergraph.
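As a sketch of what that lexicon might look like in code (Python, with names and structure of my own choosing), each term maps to its alternative definitions, and composing is just merging them:

    # Each term's alternative definitions, straight from the list above.
    rules = {
        "A": [{"B", "C"}, {"C"}],
        "B": [{"A"}, {"D"}],
        "C": [{"A", "D", "E", "F"}],
        "D": [{"C"}, {"C", "B"}],
        "E": [{"C"}],
        "F": [{"C"}],
    }

    # Compose the lexicon into one adjacency mapping by merging each
    # term's definitions; this is one view of the (hyper)graph behind
    # both pictures.
    adjacency = {term: set().union(*defs) for term, defs in rules.items()}
    print(adjacency["D"])  # {'C', 'B'}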

Picking which view of the hypergraph works in a particular context is the kind of work that brains are used for.

Hyperfunctions

Carrying on with the previous post, but narrowing it down to a very simple use case, I am imagining a function that takes 1 or 0 as its input. Except this function isn’t. Isn’t a function.

It isn’t a function because, instead of each input tracing to one output (the definition of a function, after all), each input can produce different outputs. Let’s imagine, for the sake of simplicity, that the codomain of our not-a-function is also 1 and 0.

Now let us imagine that our not-a-function has two different definitions: given the same input of 0, one produces an output of 0 and the other produces an output of 1. As such, our not-a-function is a hyperfunction that actually defines two function spaces.

Generally speaking, such a function would be dispatched to one definition or the other, but let’s not and say we did. Let’s say, instead, that this is a hyperfunction, or a function junction, or a quantum superfunction. I’m hardly the first person to have such an idea: junction types do this for values; this is a simple extension of that idea to functions.
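A minimal sketch of the idea in Python (the set-valued representation is my own; I’m also assuming an output for input 1, which the example above leaves open):

    # Model the hyperfunction as a mapping from each input to the *set*
    # of outputs its definitions can produce.
    hyper = {
        0: {0, 1},  # the two definitions disagree here
        1: {1},     # assumed, for illustration
    }

    def apply_hyper(hf, x):
        return hf[x]  # a set of possible outputs, not a single output

    print(apply_hyper(hyper, 0))  # {0, 1}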

What use could such a hyperfunction be put to?

For sure, it could be used to define an abstract syntax hypertree. Which one imagines could be usefully employed in graph rewriting, but that is hardly a start. One could imagine running the hyperfunction in parallel with itself, decomposing it into all its component functions as a way of collapsing the wave function (if I may be excused for mixing metaphors here).
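Sticking with the set-valued sketch above, decomposing a hyperfunction into its component functions might look like this (again my own illustration):

    from itertools import product

    # Enumerate every ordinary function inside a hyperfunction: one per
    # way of fixing a single output for each input.
    def components(hf):
        inputs = sorted(hf)
        for outputs in product(*(sorted(hf[x]) for x in inputs)):
            yield dict(zip(inputs, outputs))

    for f in components({0: {0, 1}, 1: {1}}):
        print(f)  # {0: 0, 1: 1} then {0: 1, 1: 1}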

One can imagine, in the inimitable words of Daniel Krech, using it to cause a race condition and actually race.

One can imagine rewriting a bit of programming such that most functions are hyperfunctions, and that we just force the machine to pick one when they collapse.

One can imagine, one can write, one can code…

A Function is Like a Fractal

Bringing it all back to the eminently practical domain of programming, a function is generally something we think of as taking input of some type and transforming it into some particular output deterministically. But when we give the function an input which is itself a function, and make its output also a function, we have something else. Lambda Calculus, obviously, but it is also different on another level.

The computational notion of a function derives from the mathematical one. For example, if I have a function that squares an integer, in mathematics I will say that the function f(x) = x² operates on the domain of integers to the codomain of integers and has the graph (x, x²). Cool, because my domain is all the integers and my image is all the squares. In fact, in computation it would be somewhat rare to consider all the integers at once, so we start with a domain of all the 32-bit integers (0 to 2,147,483,647 or so), and the range is decidedly less infinite than *all* the squares; it is in fact quite a short list of only 46,340 or so of them.

In practice this means we can think of the squaring function on 32-bit integers as a way of listing the nearly 50,000 integers that are squares within the constraints of our code, and that each integer in the codomain is sort of the “surface” of a function in the same way that an edge is the surface of a square.
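That count is easy to check (a quick Python sketch; math.isqrt is the integer square root):

    import math

    # The largest n whose square still fits in the signed 32-bit range.
    limit = 2**31 - 1
    n = math.isqrt(limit)
    print(n)                           # 46340
    print(n**2 <= limit < (n + 1)**2)  # True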

We can just as easily define a function that returns a function, though, which does a related but somewhat different thing. Our new power function takes an integer argument and returns a function that raises any integer to that power. Cool!
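Sketched as a Python closure (my own rendering of the idea):

    # power(n) returns a new function that raises its argument to n.
    def power(n):
        def raise_to(x):
            return x ** n
        return raise_to

    square = power(2)
    cube = power(3)
    print(square(7), cube(3))  # 49 27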

This function defines a list of lists, which are all the squares, all the cubes, all the fourth, fifth, sixth, and so on powers. And the “surface” of this function is a list rather than a number.

Imagine going another step and making a function that takes an argument that is a mathematical operation with a cardinality, which produces a power function or a tetration function or a logarithm function, which in turn could be passed an integer, which in turn would yield a list. Now our meta function’s surface is a power function. Or a tetration function.
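One more rung of the ladder, sketched in Python (the names, and the choice of which operations to include, are mine):

    # operation(name) returns a function-builder; the builder returns
    # a function. Three layers deep: operation -> builder -> function.
    def operation(name):
        if name == "power":
            return lambda n: (lambda x: x ** n)
        if name == "tetration":
            def tetrate(n):
                def tower(x):
                    result = 1
                    for _ in range(n):  # n-fold iterated exponentiation
                        result = x ** result
                    return result
                return tower
            return tetrate
        raise ValueError(f"unknown operation: {name}")

    square = operation("power")(2)
    print(square(5))                     # 25
    print(operation("tetration")(2)(3))  # 27, i.e. 3 ** 3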

Eventually we get to the point where we are meta enough that we have actually defined function space itself, which it turns out is not quite so infinite as we once imagined. We come to understand that the functions passed to the functions define a codomain that is something other than “the answer to life, the universe, and everything”: computability theory (thank you, in this particular case, to Alonzo Church) tells us that this function space cannot compute some answers for us, but we generally accept that those answers which are computable are computable this way.

I digress.

The larger point being that a function which takes functions as arguments (its domain) and produces functions as its output (codomain), and whose graph looks simply like (function, function): this looks a lot like a fractal to me, when I squint at it the right way.

A Fractal is Like a Hypergraph

A fractal is a neat kind of shape that has a strange property: every part of the shape is composed of the entire shape. We are used to looking at fractals in a particular way, a snapshot from a distance, but what that shows is not really a fractal; it’s a picture of one surface of a fractal.

Another way to think of it is to compare it to a kind of geometry that we all understand well: the Euclidean kind. Consider a point. One way to think of it is that a point is one “surface” of a line, and the line has infinitely many such surfaces, but our ability to look at only one point at a time tempts us to think of it differently. The line, in turn, could be imagined as one surface of a plane. This is even easier for us to think about, because we usually think of something with edges when we think of a plane (say, for instance, a square), and those edges are lines. An edge and a surface, in this context, are synonyms.

We can keep going in this fashion. A plane is one surface of a volume (said another way, a square is one surface of a cube). Again, easy to think of, at least until we get to the next step, where we say a cube is just one surface of a hypercube (said the other way, a volume is just one surface of a hypervolume).

Phew!

What does this have to do with graphs and fractals? Well, a fractal has this unique property that we described earlier, where every part of it is made of its entire self. So if you “zoom in” you see structures like the ones you saw further out. And if you zoom out and take a picture, it just looks like an interesting shape that can, coincidentally, be found again with identical details upon zooming in.

Put another way, a fractal is just like a line, except instead of being composed of points, it is composed of fractals.

A hypergraph has a similar property, which is that it is like a graph whose parts, instead of nodes and edges, are composed of graphs.

Amirite?

Begin

Binary is easy. Pick only one item from a list of only two. Day, night. Pick one, you’ve got binary. Which is not to say that you can’t make it more complicated than that, because we could always talk about morning and evening and twilight and eclipses and the North Pole where day lasts for six months.

Binary numbers are particularly easy, because we can use the same positional arithmetic that we use with decimal numbers, except it’s easier for the lack of 2, 3, 4, 5, 6, 7, 8, and 9.

In binary arithmetic, we pick from 0 and 1 instead of day and night, and we assign each position to increase by an order, just like with decimal. We count like this: 0, 1, 10, 11, 100, 101, 110, 111, 1000, 1001, 1010, which is read “Zero, one, two, three, four, five, six, seven, eight, nine, ten.” Gottfried Leibniz wrote a neat little essay about it three or so hundred years ago, which I cribbed from.
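The same counting, sketched in Python (a hand-rolled conversion, to show the positional mechanics):

    # Convert n to its binary digits by repeatedly taking the remainder mod 2.
    def to_binary(n):
        digits = ""
        while n:
            digits = str(n % 2) + digits  # low-order bit goes on the right
            n //= 2
        return digits or "0"

    print([to_binary(n) for n in range(11)])
    # ['0', '1', '10', '11', '100', '101', '110', '111', '1000', '1001', '1010']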

Letters are the same. Four hundred or so years ago, Francis Bacon came up with a “cipher” for English text that used only the letters A and B. It works out to be the same thing as with the numbers above, except written something like this: A, B, BA, BB, BAA, BAB, BBA, BBB, BAAA, BAAB, BABA, which is read, “A, B, C, D, E, F, G, H, I, J, K.”
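And the letter version, sketched the same way (this is the counting variant described above, with A for 0 and B for 1, not Bacon’s fixed-width cipher):

    # Counting in A (for 0) and B (for 1) instead of digits.
    def to_ab(n):
        return bin(n)[2:].replace("0", "A").replace("1", "B")

    print({letter: to_ab(i) for i, letter in enumerate("ABCDEFGHIJK")})
    # {'A': 'A', 'B': 'B', 'C': 'BA', 'D': 'BB', 'E': 'BAA', ...}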

It turns out that anything we can write with more than two characters, we can also write with two. The same statement is not true of one character. This makes two special. Binary just means two, and we can encode anything into binary (that’s what we did in the previous two paragraphs).

In 1854, George Boole noticed that the same kind of transformation could apply to logic, i.e., that most of what we do with it can be simplified down to a selection from a list of exactly two alternatives: True and False. Then we could use that simplification to do math on the logic, and it would turn out with predictable results.
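Boole’s move, sketched as arithmetic on 0 and 1 (a common rendering of the idea, not his original notation):

    # With False as 0 and True as 1: AND is multiplication,
    # NOT is subtraction from 1, and OR follows by De Morgan.
    def NOT(p):    return 1 - p
    def AND(p, q): return p * q
    def OR(p, q):  return NOT(AND(NOT(p), NOT(q)))

    print(AND(1, 0), OR(1, 0), NOT(0))  # 0 1 1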

Then, in the early part of the 20th century, Claude Shannon figured out that Boole’s logic looked a lot like electrical circuitry if you used it the right way, and wrote a nifty little Master’s Thesis in which he described how. In those days, electrical circuitry was generally used in the continuous way (i.e., take an exact reading and say what it is), but now we tend to describe it instead as either On or Off.

Which is not to say that we couldn’t make it more complicated with the electrical equivalents of twilight and sunrise, but let’s not.

A Lengthy List of the Things That Make Programming Wonderful

I have loved the programming languages that I have loved because they have justified my suffering. I have learned to use functions and objects and paradigms and punctuation and ad hoc vocabularies, and all that effort is rewarded when I’m forced to use them. But this doesn’t make them good; it just makes me a victim of Stockholm Syndrome. This list is not something I’ll focus on in the future, because I want to focus instead on where I’m going. But I do want to bid a fond farewell to all these, my friends:

  • variables named foo or i
  • variables named anything else
  • functions with similarly bad names
  • any other functions
  • objects, classes, inheritance, and dispatch
  • bad programming paradigms
  • good programming paradigms
  • programming paradigms
  • CamelCasing
  • under_scores
  • complicated namespaces
  • uncomplicated namespaces
  • namespaces
  • { curly braces }
  • parentheses()
  • $sigils and @decorators
  • excessive punctuation, in general
  • message passing
  • callbacks
  • roles, interfaces, algebraic types
  • dependencies, versions, packages, libraries
  • toolkits, frameworks, architecture
  • elegance
  • obfuscation

The list could keep going. I have loved all these wonderful friends for so long because they made me feel spiffy and smart, and I bid them adieu. I am leaving them behind, but I am taking programming with me.

Anyone want to come?