I had a semester of abstract algebra as an undergrad, and I've always been surprised by how many dividends that semester has paid back over the years. The tools of higher algebra are very powerful in the right situation. This article is a great development of algebraic concepts and introduction to thinking algebraically, which is a really powerful tool to have in your toolbox.
(Closely related is the idea of invariants: properties that are preserved by particular operations or functions. Often invariants are related to some algebraic structure of the system, but can be easier to identify and support a lot of the same insights. Reasoning about invariants of systems is another great way to make progress on hard problems.)
I find that very few engineers (especially in hardware) have had exposure to this stuff. Being the only one in the room who's had an abstract algebra course means I've occasionally been able to provide a completely different line of attack on hard problems. This has been good for my career!
As an example, I once helped a friend debug a complex system that was not behaving correctly. There was an input state, a nasty sequence composed of simple operations applied to that state, and an output state, which was not behaving as expected. A bit of algebraic thinking showed that each of the simple operations preserved an invariant, so no sequence of valid operations, no matter how complex, could produce that output. This meant that debugging attention could be directed at the implementations of those simple operations, which led to finding bugs in short order. This saved a lot of work because the actual sequence came from elsewhere and would have been difficult to audit!
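A minimal sketch of that tactic in Python, with a made-up parity invariant and made-up primitive operations standing in for the real ones (names and invariant are hypothetical, just to illustrate the technique): property-check that each op preserves the invariant, and if the observed bad output violates the invariant, the bug must be inside one of the op implementations rather than in the sequence.

    # If every primitive operation preserves an invariant, no sequence of
    # them, however long, can produce a state that violates it.
    import random

    def invariant(state):
        return sum(state) % 2 == 0          # stand-in invariant: even sum

    def swap_first_two(state):              # hypothetical primitive op 1
        return [state[1], state[0]] + state[2:]

    def bump_by_two(state):                 # hypothetical primitive op 2
        return [state[0] + 2] + state[1:]

    ops = [swap_first_two, bump_by_two]

    for _ in range(1000):
        s = [random.randrange(100) for _ in range(3)]
        if not invariant(s):                # force a valid starting state
            s[0] += 1
        for op in ops:
            assert invariant(op(s)), op.__name__ + " broke the invariant"

An op whose implementation is buggy fails this check directly, which is exactly where the debugging attention ended up being directed.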
I wonder how much better off students would be if the majority of the formal CS classes they take were replaced with more traditional (applied) math classes, for that reason. I seem to rely on ideas from those classes more than on the ones from my CS classes when I write and reason about code.
What was known as "applied math" through the 20th century was actually just one application of math: calculus / differential equations and linear algebra applied to physics (in turn applied to engineering, mechanical or electrical).
But then you have computer science, which is in many ways applied "pure" math. Although if you want to separate "applied" abstract algebra from "pure" abstract algebra, then the subject you'd want to talk about is simply called automata theory.
In fact, in the book Elements of Automata Theory Jacques Sakarovitch characterizes automata theory as the "linear algebra of computer science", in more ways than one:
I suggest that automata theory is the linear algebra of computer science. I mean this in two ways. Properly speaking, automata theory IS non-commutative linear algebra, or can be viewed as such: the theory of matrices with coefficients in suitable algebras. I am more interested, however, in the figurative sense: automata theory as a basic, fundamental subject, known and used by everyone, which has formed part of the intellectual landscape for so long that it is no longer noticed. And yet, there it is, structuring it, organizing it: and knowing it allows us to orient ourselves.
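To make the "matrices with coefficients in suitable algebras" reading concrete, here is a toy sketch in Python (my own example, not Sakarovitch's): each letter of a DFA becomes a Boolean matrix, and reading a word is just multiplying the letters' matrices in the Boolean semiring.

    # M[i][j] = 1 iff the letter moves state i to state j; a word's
    # effect is the product of its letters' matrices.
    def bool_matmul(A, B):
        n = len(A)
        return [[int(any(A[i][k] and B[k][j] for k in range(n)))
                 for j in range(n)] for i in range(n)]

    # 2-state DFA over {a, b}: 'a' sends both states to 1, 'b' to 0.
    M = {'a': [[0, 1], [0, 1]],
         'b': [[1, 0], [1, 0]]}

    def word_matrix(word):
        result = [[1, 0], [0, 1]]           # identity matrix
        for letter in word:
            result = bool_matmul(result, M[letter])
        return result

    print(word_matrix("aab"))               # [[1, 0], [1, 0]]: "aab" ends in state 0
    # Non-commutativity is the whole point:
    print(bool_matmul(M['a'], M['b']) == bool_matmul(M['b'], M['a']))  # False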
Note: if you look at my other comment in this thread, I suggested that exmadscientist was talking about Floyd's invariance principle, which, if you follow my cited source there, you can see explicitly formulated as an induction proof on transitions of a state machine.
It is (unless you are...well...applying it to something). I just didn't want to limit my statement to pure math topics, since I find, e.g. optimization, to be helpful.
Yes! Although I was an econ major, I did the same thing: I dipped my toes into pure math and took some real analysis and abstract algebra. I really found these challenging at times, but the ideas and modes of thinking I gained from those classes have been very rewarding since.
I relate its value in programming to the Torvalds adage "Bad programmers worry about the code. Good programmers worry about data structures and their relationships." Taking some abstract algebra really helps you think about data structures and their relationships, and to architect "good bones" in your code.
It's true, but god, I wish people cared about the code more. So many systems could be made great by better naming and an understanding of the semantic meaning of functions and/or classes.
If I understand exmadscientist correctly, this sounds a lot like Floyd's invariant principle, in which the argument is understood as mathematical induction, where the induction is on (property-preserving) transitions of the state machine (and with the initial state as base case). See section 2 of chapter 6 in Mathematics for Computer Science: https://courses.csail.mit.edu/6.042/spring18/mcs.pdf
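A toy instance of that induction in Python (my own example, not the one in the MCS text): a machine starts at 0 and each transition adds or subtracts 2. Base case: 0 is even. Inductive step: both transitions preserve evenness. So no reachable state is odd, which a brute-force search confirms.

    # Floyd-style invariant argument, checked by exhaustive search over
    # a bounded state space.
    def reachable(limit=10):
        seen, frontier = {0}, [0]
        while frontier:
            s = frontier.pop()
            for t in (s + 2, s - 2):
                if abs(t) <= limit and t not in seen:
                    seen.add(t)
                    frontier.append(t)
        return seen

    states = reachable()
    assert all(s % 2 == 0 for s in states)   # the invariant holds everywhere
    assert 7 not in states                   # so state 7 can never be reached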
I've had 4 semesters of abstract algebra because I was a math major, and I found even the first two semesters difficult. Math had always come easily to me when I was young, so I didn't really understand that university-level mathematics would require some serious study; naturally, I would have gotten a lot more out of the early algebra classes if I had put more work into them.
I had much better intuition for analysis than for algebra; the results in algebra are just more surprising to me than those in analysis. My attempts at proofs in algebra were kind of like random walks where I would eventually stumble upon the answer and then have to reconstruct the logical steps to get there without all of the unnecessary circumlocution.
Years later, as a much better student, I took a graduate course in group theory and really enjoyed it because I actually spent some time studying the subject.
I really love the way that abstract algebra deals with such simple, almost meager entities: sets with just a few basic operators. The theorems about these completely abstract, virtual, not actual things reveal properties that are foundational for all math and somehow underlie our reality.
When thinking about re-learning math I always think of doing it through some Computer Algebra System. It seems to me like a much better way to learn math, allowing me to tinker with things and treat subjects as black boxes (what's the output of X process if I change the input to Y?) as opposed to just trying to understand the static examples presented on paper.
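For what it's worth, that "change the input, watch the output" loop is already possible in SymPy, a free Python CAS. A trivial sketch of the style:

    # Treat polynomial factorization as a black box and vary the input.
    from sympy import symbols, factor

    x = symbols('x')
    for n in range(2, 7):
        print(n, factor(x**n - 1))   # how does the factorization change with n?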
The problem is the embarrassment of riches [1].
Compounding the problem is the awful marketing some of these products have for casual and hobby learners. At this point, I'm willing to spend a few hundred dollars on a personal edition of a product like Wolfram's Mathematica, Matlab, or Maple, but I'm not sure what would be the best investment of my time and money.
Could you recommend any courses or books that use a CAS to teach math concepts and applications?
Opinion: There’d be a lot more people interested in math if it were taught with greater emphasis on visualization, experimentation, and self-verification (i.e. via CAS/programming).
Speaking for myself, it vastly enhances the value of my “pen and paper” and “stare at book” math time (the old fashioned way—also valuable and necessary, but [for me] not sufficient, for deep understanding).
I really dislike the standard definition of a group - that it's a set S together with a binary operation * such that a bunch of properties hold. The definition doesn't build intuition, and doesn't motivate the introduction of the concept of "group".
For me, a group is the set of automorphisms of a graph (the isomorphisms from the graph to itself). If you expand the definition of "graph" a little to include continuous spaces, that is sufficient to define all groups. And yet, this nice, intuitive definition of a group might show up at the end of a course in group theory - if you're lucky.
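That definition is easy to poke at in code. A small sketch (my own toy example, a 4-cycle graph): enumerate the relabelings that map edges to edges and check the group axioms by brute force.

    from itertools import permutations

    edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}

    def is_automorphism(p):
        return all(frozenset((p[a], p[b])) in edges
                   for a, b in (tuple(e) for e in edges))

    autos = [p for p in permutations(range(4)) if is_automorphism(p)]
    print(len(autos))                        # 8: the symmetries of a square

    compose = lambda p, q: tuple(p[q[i]] for i in range(4))
    identity = (0, 1, 2, 3)

    assert identity in autos                                          # identity
    assert all(compose(p, q) in autos for p in autos for q in autos)  # closure
    assert all(any(compose(p, q) == identity for q in autos)
               for p in autos)                                        # inverses
    # Associativity is inherited from function composition.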
It really is a shame how much intuition is stripped from mathematics teaching in the name of formalism.
If something is intuitive only at the end of a 4-5 month long course, it is not intuitive, and I would not show it during any sort of "introduction to...". A definition that requires a prior explanation of all of graph theory is something I would put at the end... of an article about graph theory. I cannot make many assumptions about my readers, and a CS magna cum laude graduate is definitely not one of them (why would they need the article then?).
You don't need a massive amount of graph theory. "Rearrange the labels of the graph so that the new graph looks the same as the old one" isn't exactly arcane, is it?
My humble experience shows me that this might be too much of a digression for many people. Graphs as a visual example of, e.g., dependencies or data structures are fine. But if you try to explain something with them, you assume that the reader has had some prior exposure to graph theory and has some intuitions already. Since I cannot assume they have these intuitions, I would have to build them with other examples, and the digression could be longer than a paragraph.
Abstractions work great when you have worked enough with some class of specific problems that your brain notices the common parts on its own. If you try to rush it... you get another tutorial where the author is enthusiastic and positive and the readers feel ashamed and stupid that they "couldn't get it". I'd rather avoid that. If they get the foundations, play with them, and gain some confidence, they can move on to more challenging and generic definitions.
Since I'm not an expert in graph theory, especially the arcane theory of "continuous graphs", your definition is unintuitive and excessively formal to me. How is that better? Abstractions are abstract. Almost no one learns purely by abstraction. Any definition must be accompanied by examples, and always is when taught.
Groups are taught in the context of addition/subtraction, permutations, or symmetry groups, all far more intuitive than graphs.
I prefer thinking about group theory as the study of the symmetric group on n elements. Then, to make sense of the case of n = infinity, you introduce the group axioms.
Eventually, you can justify the study of permutations using the fact that if you take any algebraic structure and consider its automorphisms, you get a group.
Well, then you are missing the entire (additive) group of integers... But you are right in that automorphisms are the most important example, and they also play a huge theoretical role (as representations of abstract groups). This is what gives group theory its importance in math and physics.
Again, this is only what is called 'a representation.' (The problem with trying to use one as a substitute for the abstract definition is that there usually are many different representations.)
I think it would be very difficult to prove things about groups using that definition. For instance, it's not obvious to me how one would even define a subgroup in this context.
I'd guess it would be the subset of automorphisms that fixes a certain subset of the graph. But maybe I'm wrong.
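That guess does check out by brute force on a small example (the 4-cycle again; a toy sketch, not a proof): the automorphisms fixing vertex 0 are closed under composition, i.e. they form a subgroup, the stabilizer of 0.

    from itertools import permutations

    edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}

    def is_automorphism(p):
        return all(frozenset((p[a], p[b])) in edges
                   for a, b in (tuple(e) for e in edges))

    autos = [p for p in permutations(range(4)) if is_automorphism(p)]
    stab = [p for p in autos if p[0] == 0]   # automorphisms fixing vertex 0

    compose = lambda p, q: tuple(p[q[i]] for i in range(4))
    print(stab)                              # the identity and one reflection
    assert all(compose(p, q) in stab for p in stab for q in stab)  # closed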
I don't disagree that working straight from the graph-theoretic definition might make things harder. My complaint is that maths is taught as formal definition -> theorems. What I would like to see is intuitive definition -> formal abstraction of intuition -> theorems.
In my maths degree I spent far too long asking myself, "why is the definition of this thing this way?"
There's a saying in math: analysts understand what they're talking about but find it hard to prove things, while algebraists don't know what they're talking about but find it easy to prove things.
I find this to be true, at least at the introductory level. Once you get to topology you forget what you're talking about, but that's the structural ("algebraic") view of math resurfacing. Maybe you're right, though - perhaps there should be a progression in algebra from concrete -> abstract, the same way there's a progression from concrete (real analysis) -> abstract (topology).
As someone who has struggled to self-study abstract algebra, I do find the subject lacking in "motivation". E.g., why do we define groups and fields in the way that they are defined? Why do we need them at all?
Also, most treatments do not cover categories, which seem to have their own separate literature.
Perhaps having some geometric intuition would help greatly, viz. coordinate systems in 3D Euclidean space, answering interesting questions like: when is it legal to take the dot product of two vectors? And if two unit vectors are parallel, are they the same vector?
The lack of motivation is a huge problem with math courses at every level.
I feel that mathematical topics almost always benefit from positing some problem, and then "inventing" the mathematics that allow you to answer the question, followed immediately by some examples of other questions this topic can help you answer (as at least an informal justification of some of the seemingly arbitrary choices made).
Then show a similar question you cannot quite answer, and build on the topic with that. Eventually you will have built up the majority of the topic, with motivations for each part.
Abstract algebra might be trickier to motivate than many subjects, but it still should be possible. Yet given how much trouble there seems to be in writing out motivation for more concrete topics, finding a textbook that provides motivation throughout for this area seems tricky at best.
> Additive and multiplicative notations are used when your set A can have 2 different algebras defined for it, e.g. real numbers are a group for addition: (R,+,0) and multiplication: (R,∗,1). 2 different notations help us keep track of which group we are talking about at the moment.
(R,∗,1) isn't a group; (R_+,∗,1) or (R\{0},∗,1) are. But that doesn't really work as an example of two different algebras defined on the same set, and if you also require the two operations to obey the distributive property, the only structure where both are groups on the whole set is the zero ring (0=1).
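For completeness, the one-line argument: distributivity alone forces $a \ast 0 = 0$, which rules out a multiplicative inverse for 0 in anything but the zero ring.

    \[ a \ast 0 = a \ast (0 + 0) = a \ast 0 + a \ast 0
       \;\Longrightarrow\; a \ast 0 = 0 \quad \text{(cancel in } (R,+,0)\text{),} \]

so if $(R,\ast,1)$ were a group, $0$ would have an inverse and $1 = 0^{-1} \ast 0 = 0$, forcing $a = a \ast 1 = a \ast 0 = 0$ for every $a$.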
I'm surprised how good of an investment learning algebra is. I never learned any branch of math that restructured my thinking process as radically as algebra. It even surpasses discrete math and probability. I think every undergrad should take a standard algebra course with groups, rings and fields.