← Return to all posts

Two recent preprints and the Topos blog

10th of November, 2021

Over the past couple of months, between moving countries, teaching, and a bunch of administrative faff, I’ve managed to write two short preprints with David Spivak. I blogged about them over on the Topos blog, but I’ll just write a little crossover post here with some links (so that people don’t think that this blog is entirely dead).

If you just want links to the two articles, then here they are:

  1. “Dirichlet polynomials and entropy”. Entropy 23 (2021), 1085. DOI: 10.3390/e23081085. arXiv: 2107.04832 [cs.IT].
  2. “Deep neural networks as nested dynamical systems”. Published on SIAM News Blog (2021-12-02), available here. arXiv: 2111.01297 [cs.LG].

Back over the summer, David explained some things about Dirichlet polynomials to me, how they could be visualised as set-theoretic bundles, and what this might have to do with Shannon entropy. After a month or so of thinking, I wrote up what I understood, and then David and I fleshed it out into a pretty self-contained short story, which I think turned out very nicely! I like this kind of maths a lot, even though it’s not normally what I do; it somehow “feels” a lot like the magic of generating functions, or groupoid cardinality, where things just happily categorify and give some surprising numerical results!
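To give a very rough idea of the flavour (this is just a loose sketch for this post, not the precise definitions or statements from the paper): each monomial summand of a Dirichlet polynomial gives a point of the base of the corresponding bundle of finite sets, with the fibre over that point having as many elements as the monomial says; the relative sizes of the fibres then give a probability distribution, and it’s (roughly speaking) the Shannon entropy of this distribution that enters the picture. For example,

\[
  d(y) = 3^y + 1^y
  \;\rightsquigarrow\;
  \pi \colon E \to B
  \quad\text{with } |E| = 4,\ |B| = 2,
\]
\[
  p = \left( \tfrac{3}{4}, \tfrac{1}{4} \right),
  \qquad
  H(p) = \tfrac{3}{4} \log_2 \tfrac{4}{3} + \tfrac{1}{4} \log_2 4 \approx 0.81 \text{ bits}.
\]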

I gave a quick summary of this article here on the Topos blog; there’s also the version on the arXiv, as well as the published version (which is almost entirely identical, save for some formatting and some (annoying) renumbering of theorems) in Entropy.

A few months later, after I’d moved to Stockholm (where I’m presently based), David had another interesting thing that he offered to explain to me: how to understand interacting dynamical systems through a sort of operadic, diagrammatic approach, using the formalism of polynomial functors. One specific thing that falls out of taking this point of view is possibly contentious or provocative: the common “neuron” analogy used for deep neural networks is wrong, in a “not even wrong” sort of way. We’re not suggesting that neural networks “don’t work”, or anything like that, but just that the things that people have been calling neurons should not be called neurons, because the analogy doesn’t really type check.

We explain what we mean by this in the preprint on the arXiv (and all I wrote on the Topos blog was a copy of the introduction from this preprint, so maybe go there instead if you just want a short overview). The whole thing is pretty short (around five pages long), low on technical details, and has some nice pictures.


As a footnote, I just want to point out that, no, I have not become a tech bro, cryptocurrency kid, or deep-learning capitalist shill. These projects have been about fundamentally interesting (to me) category theory, possibly applied, but not about “how can I use gradient descent to teach this neural network to do better advertising to make me money” or such like.

Lastly, yes, I am still doing work on simplicial things in complex geometry, but the projects I’m working on in this area are slow going, and I’m trying to balance them with teaching (which has required me to finally learn all the commutative algebra that I failed to learn as a master’s student myself), so I don’t have any exciting things to share on that front yet.

Anyway, as always, I don’t have commenting on this blog, but feel free to say hi on Twitter, via email, or (preferably) by training a pigeon to fly to my apartment, tap on my window, and deliver me your message on a small piece of paper wrapped around its ankle, before staying to rest at my place for a few days while I feed it and engage it in fruitful discussions on environmental activism, and then wave it on its merry way back out into the wide wild world.