Resource for neato algorithm?

Hello everyone. I was wondering whether there are any references for the model that neato in Graphviz uses to perform stress majorization. At first glance, it appears to analyze the graph first to identify which model the system needs. The documentation only mentions the Kamada-Kawai algorithm, but I believe Graphviz is using more than just that simple module.

Glad you asked! It’s an algorithm after Jan de Leeuw.

Emden R. Gansner, Yehuda Koren, and Stephen North. 2004. Graph drawing by stress majorization. In Proceedings of the 12th International Conference on Graph Drawing (GD'04). Springer-Verlag, Berlin, Heidelberg, 239–250.
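For anyone who wants the gist before reading the paper: stress majorization minimizes the stress function Σ_{i&lt;j} w_ij (‖x_i − x_j‖ − d_ij)², typically with weights w_ij = d_ij⁻², by repeatedly solving a simpler bounding problem (the SMACOF update). A toy NumPy sketch of that update — not Graphviz's actual implementation, just the idea:

```python
import numpy as np

def stress(X, D, W):
    """Weighted stress: sum over pairs of w_ij * (||x_i - x_j|| - d_ij)^2."""
    s = 0.0
    n = len(X)
    for i in range(n):
        for j in range(i + 1, n):
            s += W[i, j] * (np.linalg.norm(X[i] - X[j]) - D[i, j]) ** 2
    return s

def majorize_step(X, D, W):
    """One SMACOF-style update: each node moves to a weighted average of
    the positions lying at the ideal distance d_ij from its neighbours.
    This update never increases the stress (the majorization guarantee)."""
    n, dim = X.shape
    X_new = np.zeros_like(X)
    for i in range(n):
        num = np.zeros(dim)
        den = 0.0
        for j in range(n):
            if i == j:
                continue
            diff = X[i] - X[j]
            dist = np.linalg.norm(diff) or 1e-9  # guard coincident points
            num += W[i, j] * (X[j] + D[i, j] * diff / dist)
            den += W[i, j]
        X_new[i] = num / den
    return X_new

# Toy input: 4 nodes on a path; ideal distances = shortest-path distances.
D = np.array([[0, 1, 2, 3],
              [1, 0, 1, 2],
              [2, 1, 0, 1],
              [3, 2, 1, 0]], dtype=float)
W = np.zeros_like(D)
W[D > 0] = D[D > 0] ** -2.0          # the usual w_ij = d_ij^-2 weights
X = np.random.default_rng(0).standard_normal((4, 2))
s_before = stress(X, D, W)
for _ in range(50):
    X = majorize_step(X, D, W)
s_after = stress(X, D, W)
```

For a path graph the optimal layout is a straight line with zero stress, so the stress should fall steadily from the random start.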

Besides this, there’s an optional stochastic gradient solver.
J. Zheng, S. Pawar, and D. Goodman, "Graph Drawing by Stochastic Gradient Descent," in IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 9, pp. 2738–2748, 2019.
doi: 10.1109/TVCG.2018.2859997
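Roughly, the idea in that paper is to visit node pairs in random order and nudge both endpoints of each pair toward their ideal distance, with the per-pair step capped so no single move overshoots, under an annealed step size. A toy sketch in that spirit — again, not Graphviz's code, and the schedule details here are my own simplification:

```python
import numpy as np

def sgd_stress_layout(D, n_iter=100, eps=0.01, seed=0):
    """Toy stochastic-gradient stress minimization in the spirit of
    Zheng, Pawar & Goodman (2019): shuffle the node pairs, move both
    endpoints of each pair half the residual toward the ideal distance
    d_ij, and cap the per-pair step mu at 1 so no move overshoots."""
    rng = np.random.default_rng(seed)
    n = len(D)
    X = rng.standard_normal((n, 2))
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    w = {p: D[p] ** -2.0 for p in pairs}     # the usual w_ij = d_ij^-2
    # Step size anneals geometrically from 1/w_min down to eps.
    eta_max = 1.0 / min(w.values())
    lam = np.log(eta_max / eps) / max(n_iter - 1, 1)
    for t in range(n_iter):
        eta = eta_max * np.exp(-lam * t)
        rng.shuffle(pairs)
        for i, j in pairs:
            mu = min(w[i, j] * eta, 1.0)     # capped step size
            diff = X[i] - X[j]
            dist = np.linalg.norm(diff) or 1e-9
            r = (dist - D[i, j]) / 2.0 * (diff / dist)
            X[i] -= mu * r
            X[j] += mu * r
    return X

# Toy usage: a 3-node path should end up nearly collinear,
# with both edges close to their ideal length of 1.
D = np.array([[0, 1, 2],
              [1, 0, 1],
              [2, 1, 0]], dtype=float)
X = sgd_stress_layout(D)
```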

A study by Börsig, Brandes, and Pasztor shows that it is a good idea, but for different reasons than the ones given in the Zheng et al. paper.

Katharina Börsig, Ulrik Brandes, and Barna Pasztor. 2020. Stochastic Gradient Descent Works Really Well for Stress Minimization. In Graph Drawing and Network Visualization: 28th International Symposium, GD 2020, Vancouver, BC, Canada, September 16–18, 2020, Revised Selected Papers. Springer-Verlag, Berlin, Heidelberg, 18–25.


Thanks a lot, my friend. That is exactly what I needed. It would be interesting to go through this now. I know what I am doing this week in my free time.

Also, shouldn’t this be part of the documentation?

Yeah, we probably should add these citations to the docs. Let me update them a little now.

I just updated the docs to refer to stress majorization. But I'm less sure how to work the second citation (for the optional stochastic gradient solver) into the docs.

How do you trigger the optional stochastic gradient solver?

I have not tried it myself, but from my quick search (here), there seems to be a mode=KK option that selects the gradient descent method.

Thanks for adding the reference as well.

EDIT: using -Gmode=KK with the command-line utility on the philosophers example (which I had lying around on my system) gave an ill-conditioned error, which has to do with the matrix solver. But that means -Gmode=KK is changing the layout mode. I do not have time at the moment to check with other examples.

I’m guessing KK stands for Kamada-Kawai? And mode=“sgd” probably means stochastic gradient descent.
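For future readers who want to experiment: here is a small, hedged way to try the different modes from Python (assuming neato is on your PATH; I believe mode=sgd is only available in newer Graphviz releases, so the sketch degrades gracefully when a mode fails):

```python
import shutil
import subprocess

# A toy graph; 'mode' is the neato graph attribute discussed above:
# "major" (default stress majorization), "KK" (Kamada-Kawai), and,
# in newer Graphviz releases, "sgd" (stochastic gradient descent).
DOT = "graph { a -- b -- c -- a; b -- d }"

def render_with_mode(mode):
    """Render the toy graph with neato in the given mode, returning the
    SVG text, or None if Graphviz is missing or the mode/solver fails."""
    if shutil.which("neato") is None:
        return None  # Graphviz not installed; skip
    result = subprocess.run(
        ["neato", f"-Gmode={mode}", "-Tsvg"],
        input=DOT, capture_output=True, text=True,
    )
    return result.stdout if result.returncode == 0 else None

svg = render_with_mode("KK")   # try "sgd" on builds that support it
```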


Thank you. I’ve added some more citations under the “mode” docs page:

This commit:


Yes, you are right there. And the commit looks good, thanks. Future travelers won’t face the same problem I did.