Hacker News
An Introduction to Geometric Algebra (bitworking.org)
230 points by ingve on Dec 22, 2016 | 36 comments



I LOVE Clifford algebra. It has been a gateway for me into multilinear algebra and abstract algebra, among other topics.

If a computer were able to do Clifford multiplication in a single process, it'd be able to do some NP tasks in polynomial time. Clifford algebras are a sweet intersection of combinatoric and geometric thinking.

Edit: didn't even read the article, just upvoted because of the topic.


Which NP tasks? ("Some" NP tasks are already possible in polynomial time)


Seconding the question because I want to see the encodings.

Intuitively this makes sense: in the GA formalism a "single" term requires 2^n fields (where n is the spatial dimension), and "simple" operations like addition and multiplication thus require 2^n (or more) operations to evaluate.

If you could instead somehow do those operations in O(1) time you would clearly pick up a rather nice speedup, but again I'm curious what the problems and encodings (as GA) actually look like.
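To make the 2^n blow-up concrete, here is a minimal sketch (my own encoding, not taken from any linked paper): basis blades of Cl(n,0) are represented as bitmasks of which basis vectors they contain, and a multivector is a sparse dict of up to 2^n blade coefficients, so even one geometric product touches up to 2^n × 2^n blade pairs.

```python
from itertools import product

def blade_mul(a, b, n):
    """Multiply two basis blades of Cl(n,0), each encoded as a bitmask
    of which basis vectors it contains.  Returns (sign, result_blade)."""
    sign = 1
    for i in range(n):
        if b & (1 << i):
            # Each factor of b hops left past the factors of a with a
            # higher index; every hop flips the sign once.
            if bin(a >> (i + 1)).count("1") % 2:
                sign = -sign
    return sign, a ^ b  # shared basis vectors square to +1 and cancel

def gp(x, y, n):
    """Geometric product of two multivectors in Cl(n,0), stored as
    dicts {blade_bitmask: coefficient} with 2**n possible entries."""
    out = {}
    for (ba, ca), (bb, cb) in product(x.items(), y.items()):
        s, blade = blade_mul(ba, bb, n)
        out[blade] = out.get(blade, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v}

# e1 * e2 = e12, and e12 * e12 = -1 (the pseudoscalar of the plane):
e1, e2 = {0b01: 1}, {0b10: 1}
e12 = gp(e1, e2, 2)
print(e12)              # {3: 1}
print(gp(e12, e12, 2))  # {0: -1}
```

The dict representation hides the cost a little: a dense implementation would store all 2^n coefficients explicitly, which is exactly the scaling the parent comment describes.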


Actually easy to find, for example here: https://members.loria.fr/RSchott/staceyredujanv08.pdf .


Yup, that's what I was thinking of.


This paper mentions the Hamiltonian cycle problem for graphs: https://members.loria.fr/RSchott/staceyredujanv08.pdf


Is this a genuine question, or are you trying to be blasé about employing the Socratic Method on HN?


I love to see more discussion of geometric algebra, but I have a few disagreements with this introduction:

– Starting with standard linear algebra (vector spaces, matrices, etc.) leads to confusion. These ideas are best developed within the framework of geometric algebra, where their features can be easily proved and reasoned about, but only after the basic features of geometric algebra have been presented. After a bit of exposition, the relationship to more commonly used mathematical formalisms can be explained. The same goes for trigonometry, barycentric coordinates, complex numbers, quaternions, projective geometry, differential forms, Pauli and Dirac matrices, Lie theory, representation theory, lattice theory, etc.

– The geometric product is the basic operation. Inner (“dot”) and outer (“wedge”) products of pairs of vectors are derived products. Deriving other subsidiary products from the basic geometric product for various niche purposes, and learning to reason about their features or how you might use one or another to solve particular problems, is a good exercise.

If a and b are vectors, then by definition a∧b = ½(ab – ba) = –b∧a. (We’ve intentionally cancelled out the commutative parts of the product, leaving something purely anti-commutative.)

For two vectors, by definition a·b = ½(ab + ba) = b·a. (We’ve intentionally cancelled out the anti-commutative parts of the product, leaving something purely commutative.)

That the geometric product of two vectors can be thus split apart by grade (i.e. ab = a∧b + a·b) is a consequence derived from the basic axioms. It so happens that the purely anti-commuting part of the geometric product of two vectors has the dimension/orientation of the plane they lie in, whereas the purely commuting part has the dimension of a scalar. This just falls right out of the basic definition of the geometric product.
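As a concrete check of that grade split (representation and names are mine, just a sketch): for two vectors in G², the symmetric part of ab is the familiar dot product and the anti-symmetric part is the e₁e₂ (bivector) coefficient, and swapping the arguments flips only the bivector part.

```python
def geometric_product(a, b):
    """Geometric product of two vectors a = a1 e1 + a2 e2 and b in G^2,
    returned as its two grades: (scalar part, e12 part)."""
    a1, a2 = a
    b1, b2 = b
    dot = a1 * b1 + a2 * b2     # symmetric part,      1/2 (ab + ba)
    wedge = a1 * b2 - a2 * b1   # anti-symmetric part, 1/2 (ab - ba)
    return dot, wedge

a, b = (2.0, 1.0), (0.5, 3.0)
print(geometric_product(a, b))  # (4.0, 5.5)
print(geometric_product(b, a))  # (4.0, -5.5): dot commutes, wedge flips
```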

– The initial presentation should not focus on coordinates. Coordinates are an arbitrary choice, not inherent to the geometry.

Ideally lessons should start with a vector space (or affine space) where we assume we know how to add or multiply two vectors (probably the most useful way to start building intuition is to just show vectors as arrows in a picture rather than written down as some explicit expression), and then develop a full set of multivectors on top. Figuring out how to stick some arbitrary coordinate system into place for making concrete computations is then straightforward to understand. But if students get used to working with coordinates, they will generally have great difficulty ditching them to think more abstractly. David Hestenes calls this a “math virus”: http://geocalc.clas.asu.edu/pdf/MathViruses.pdf

If you start with an affine space (e.g. the Euclidean affine plane), you can pick an arbitrary origin point “zero”, and then treat the other points in your space as zero plus some vector displacement, and get to work with the displacement vectors in the derived vector space. You can then pick an arbitrary direction and magnitude to be your e₁ direction, and an arbitrary orientation of the plane (I) or an arbitrary orthogonal e₂ vector of the same magnitude. This lets you keep always in mind that your chosen origin and basis were arbitrary, and can be changed at will to suit the problem.
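A toy illustration of that setup (the names are mine, nothing canonical): treat points as opaque data, immediately convert them to displacement vectors relative to a chosen origin, and observe that the geometry does not care which origin you picked.

```python
# Points in the affine plane; all geometry below goes through
# displacement vectors, never through the raw coordinates directly.
P, Q = (3.0, 4.0), (6.0, 0.0)

def displacement(origin, point):
    """point = origin + vector: recover the displacement vector."""
    return (point[0] - origin[0], point[1] - origin[1])

def length(v):
    return (v[0] ** 2 + v[1] ** 2) ** 0.5

# Two arbitrary choices of origin give the same distance from P to Q:
for origin in [(0.0, 0.0), (10.0, -2.0)]:
    vP = displacement(origin, P)
    vQ = displacement(origin, Q)
    pq = (vQ[0] - vP[0], vQ[1] - vP[1])
    print(length(pq))  # 5.0 both times: the origin was arbitrary
```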

– This presentation doesn’t discuss much of any of the geometrical relationships between vectors, bivectors, lines, planes, etc., or how you might go about using geometric algebra to solve practical geometry (or Newtonian mechanics, or computer graphics, or whatever) problems.

The difficult part about geometric algebra is not the basic definitions, but rather learning to think with it fluently, and practicing combining its abstractions. I personally don’t feel like I have reached a level of real fluency, but only basic familiarity. Mainly because I habitually reach for more standard tools which I know better from using them for years of schoolwork.

The “sunset geometry” blog post which didn’t get any discussion here a week or two ago was IMO a better introduction. https://news.ycombinator.com/item?id=13162368 http://www.shapeoperator.com/2016/12/12/sunset-geometry/


Seems like all 'mistakes' you mention were intentional.

The article states at the beginning that the author wants to start with a concrete example, instead of diving into abstractions and formalisms. You respond by complaining that it isn't abstract enough.

I get your point, because this might set up some wrong intuitions. On the other hand, the more abstract approach might be so dense as to resist any intuition at all. In my experience, wrong intuition is a better starting point for learning math than no intuition at all.


I didn’t say these were “mistakes”, only that I disagree with the pedagogical choices made.

I don’t think we should start with something entirely abstract. Indeed, I want to start with something much more concrete: physical pictures of vectors as arrows, and a collection of concrete geometry problems to solve. If we later want to layer in a coordinate system (e.g. because it helps us solve one of our listed problems), we can draw several possible sets of coordinates over the top of our pictures, and think about how the computations in each coordinate system change while the underlying vector arithmetic stays the same.

Coordinates are an abstraction layered on top of vector spaces, and it is difficult for many students to understand the difference between the system or model being studied and the arbitrary details of one representation. I think focusing on coordinates narrows students' viewpoints and leads them to a more rote/computational and less conceptual understanding, to the point that e.g. many undergraduate physics majors have difficulty with tests of basic competence about vector concepts, despite needing to solve a variety of hard computational problems.

Indeed, this specific failing of conventional formalisms is one of the primary reasons to switch to geometric algebra as a modeling tool in the first place.

It’s like trying to teach someone to play chess by looking at “algebraic notation” instead of starting by looking at a board and moving physical pieces around, or teaching someone phonology of a new language by looking at prose descriptions in their original language and static diagrams of a mouth, instead of starting by just directly listening to the sounds.

New languages, systems, abstractions, etc. are best learned through (at least some amount of) immersion in the mode of thinking/communication of the language, not purely through formal analysis using external tools. The presentation in the OP leads me to believe that the author (like myself, to be honest) is not actually fluent with thinking about geometric algebra concepts / tools, but is treating them like a novice second-language learner.

(Sorry if that sounds harsh. I’d love to buy the blog post author a coffee or beer if he’s ever in San Francisco.)


It felt to me like the author was aiming at physics / comp-sci students who are already familiar with linear algebra, and may have less of an aptitude for working from first principles.

Though I have to admit I don't know geometric algebra, from both you and the article I gather it was created as an improvement on linear algebra. For those less innately interested in maths, starting from such a position tends to work much better than starting from scratch.

I feel like there are 2 methods aimed at different people. For the 'real' mathematicians, the approach you advocate is better suited. For those who use maths only as a tool, and do not so much appreciate the innate properties of maths, the easier to follow (but less informative) method of the author might be better.

Sadly, there is no single 'best method' for teaching a thing.


> If a computer were able to do Clifford multiplication in a single process

What do you mean by "in a single process"?


What have you found to be a good introduction?



I enjoyed Alan MacDonald's book when I was first getting the hang of things. Never read much of David Hestenes, although he has a lot of good literature.

Lately I've been more interested in the deeper workings of Clifford algebras, which is their intimate connection with quadratic forms. When I get to a computer I'll try and post some resources.


There's a library called versor that might be handy for folks that want to use geometric algebra in programs.

https://github.com/wolftype/versor

https://github.com/weshoke/versor.js/


Perl 6 users should also check out my library, inspired by versor.

https://github.com/grondilu/clifford

It's not as well optimized but it's much easier to use.


> Similarly, to rotate vectors you have to create matrices, which don't exist in ℝ2, and apply them to vectors through matrix multiplication.

You don't "need" matrices, they're just one way to represent linear functions that map vectors, and the complex field is another (for R^2).

Coincidentally, most of the concrete implementations described here for operations in G^2 are applications of Euler's formula.

This article drops off when extending the lessons from R^2 to R^3. Comparatively, matrices excel for comprehensibility in this regard: if you know how to apply matrices in R^N, you can apply the same knowledge to R^(N+1).


It's a reasonable nit, but that section is just background and rotors are introduced farther down. Rotors are unit elements of the even-grade subalgebra of Cl_2(R), and that subalgebra can be shown to be (ring) isomorphic to the complex numbers.
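A quick sketch of that isomorphism in action (the conventions here, R = e^(-e12 θ/2) applied as the sandwich R v R~, are my own choice; other texts flip signs): the rotor rotation and plain complex multiplication by e^(iθ) give the same answer.

```python
import math, cmath

def rotor(theta):
    """R = e^(-e12 θ/2) = cos(θ/2) - sin(θ/2) e12,
    stored as its (scalar, e12) components."""
    return (math.cos(theta / 2), -math.sin(theta / 2))

def apply_rotor(R, v):
    """Sandwich product R v R~ for a vector v = x e1 + y e2.
    Expanding the products reduces it to an ordinary rotation."""
    S, B = R
    x, y = v
    ct, st = S * S - B * B, -2 * S * B   # cos(theta), sin(theta)
    return (ct * x - st * y, st * x + ct * y)

# The even part {s + b e12} is ring-isomorphic to C (e12^2 = -1), so the
# rotor sandwich must agree with complex multiplication by e^(i theta):
theta = 1.1
vx, vy = apply_rotor(rotor(theta), (3.0, 4.0))
z = (3.0 + 4.0j) * cmath.exp(1j * theta)
print(abs(vx - z.real) < 1e-12, abs(vy - z.imag) < 1e-12)  # True True
```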

Personally I find the formulations for geometric algebra to be simpler than the formulations for equivalent concepts using matrices and linear algebra, in the situations where they exist.


> You don't "need" matrices, they're just one way to represent linear functions

Is there another way?


As vectors, as algebraic formulae, as a graph, etc


Please don't conflate sets with algebras. Strictly speaking, the tuple (R^2, +, ·) (where · is multiplication by a scalar) is a linear algebra over R; R^2 is just a set.

Now I know mathematicians often refer to spaces by their underlying set, but the notation G^2 doesn't really make sense (for starters, R^2 = R x R, so what is G?). Calling it the Geometric Algebra over R^2 would be fine. You can even refer to it as just R^2, as long as it's clear that you're using the geometric product (and what metric you're using).


Many people denote by R the field of reals, not just its underlying set. IMO this is a very sensible thing to do, for the following reasons:

(0) When you construct the reals, you don't construct just a set - you construct a more elaborate structure into which the rationals, equipped with their ordered field structure, can be embedded in a structure-preserving way.

(1) When you work with the reals, you again don't just use the underlying set. If you only need the underlying set, why couldn't you just take the power set of the naturals? It's much simpler to construct, and qua set (that is, taking only cardinalities into consideration), it's just as good.

And, of course, every field can be regarded as a vector space over itself in an obvious way. So the notation R^2 is completely justified.

OTOH, I completely agree with your complaint about the notation G^2.


G is a 4-dimensional vector space over R, so its underlying set is not R^2. The geometric product of two vectors is not an element of R^2 - it's a real number + a bivector. We are really working in the exterior algebra.


In the first paragraph, do you mean vector space over R? I wouldn't say "algebra" unless I were specifying a way to multiply two vectors.

I agree that you've found two notational abuses but I think they're very common; as an example of the second, the sphere S² is not the same as the torus S¹xS¹.


You're right, it's not really the product of some set "G" with itself but hooooo boy a programming/hacker forum is probably the wrong place to be defending the syntax of mathematical notation


If anything, programmers should (and often do!) complain that mathematical notation and terminology isn't defined rigorously enough, and that the main justification for many design decisions is “hysterical raisins”. By comparison, even the wackiest programming languages are paragons of consistency - after all, they have to be interpreted by machines that can only follow rules.


I'm reading a disagreement in your phrasing, but I think we're saying the same thing: Math notation is the wild west, dressed in the veneer of consistency but indignant at any untoward implication.

Programming languages are praised for their elegance and economy, even when they're wacky. Any theory (programmatic, mathematical, scientific) benefits from those qualities, because theories like to be axiomatic and terse. But any primer or intro or lesson is made MUCH better through repetition and redundancy. Computers don't need the rule of three; people do, and there's a mismatch.


Yes, there's a disagreement. contravariant's complaint about the notation G^2 is completely justified.


I would object too, but I don't see how that could be any more confusing than whiteboard "you get what I mean" pseudocode.

The reader walks in with knowledge of the product-based construction of N-dimensional space, and we readily admit that "R^2" (literally a set) is taken to mean the space equipped with whatever operations are appropriate for whatever algebra.

I'm being kinda sophist on this. "G^2" isn't meaningful. I agree. But given the convention of naming a set with the implication of whatever topology/metric/norm, the notation is already under harsh torture, right..?


In my reply to contravariant, I explain why I don't think R^2 constitutes abuse of notation: https://news.ycombinator.com/item?id=13241874 .


> but the notation G^2 doesn't really make sense (for starters R^2 = R x R, what is G?)

A superscript can refer to other things besides exponentiation, and using "R^2" to refer to the vector space is a perfectly valid thing to do.


Read it first as algebraic geometry, was confused that I understood some of the words.


It's hard to understand why he complains about vector operations that require stepping outside of R^2, but then introduces a 4-dimensional vector space. The justification needs to be reworded, in my opinion.


It really doesn't help that in 2D you can't do much that you can't do with complex numbers.

I'm not a mathematician and I really struggle with just accepting the mathematical definition. I've found it helpful to think of the geometric algebra as an extension of R^2 that contains scalars and geometrically well-defined imaginary numbers. In R^2, the dot product maps to R; you can't store this in R^2, but you can store it in the corresponding geometric algebra.

For R^3, the corresponding geometric algebra forms an 8-dimensional space: 1 for the scalars, 3 for the vectors, 3 for the bivectors (planes formed by pairs of the 3 basis vectors), and 1 for the pseudoscalar, the geometrically meaningful imaginary unit. And yet in 3D and beyond, geometric algebra provides a really elegant framework for rotations in a way that is consistent, self-contained, and readily extends to higher dimensions.
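A sketch of that 3D rotor machinery (the component layout and sign conventions are my own assumptions, not from the article): the even-grade part of G^3 (scalar + 3 bivectors) multiplies exactly like the quaternions under (i, j, k) ↔ (-e23, -e31, -e12), so a rotor sandwich R v R~ can be carried out with the Hamilton product.

```python
import math

def quat_mul(p, q):
    """Hamilton product; the even subalgebra of G^3 multiplies this way."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def rotor(axis, theta):
    """R = cos(θ/2) - sin(θ/2) B, where B is the unit bivector dual to
    the unit `axis`.  Under the quaternion isomorphism this is just the
    familiar rotation quaternion (cos(θ/2), sin(θ/2) * axis)."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return (c, s * axis[0], s * axis[1], s * axis[2])

def rotate(R, v):
    """Sandwich product R v R~, carried out in quaternion components."""
    Rw, Rx, Ry, Rz = R
    p = quat_mul(quat_mul(R, (0.0, *v)), (Rw, -Rx, -Ry, -Rz))
    return p[1:]

# Quarter turn in the e1e2 plane (axis e3): e1 should land on e2.
R = rotor((0.0, 0.0, 1.0), math.pi / 2)
print(rotate(R, (1.0, 0.0, 0.0)))  # ≈ (0.0, 1.0, 0.0)
```

The nice part, as the parent says, is that nothing here is special to 3D: the same rotor-sandwich recipe works in any dimension, whereas the quaternion shortcut does not.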


Thanks for the clear and concise exposition. It was quite useful.



