
Wait, how does this work? I'm really rusty, what would the f, A and b correspond to here?


It's the common way to solve a linear system in Octave, MATLAB, and Julia.

You have an invertible square matrix A, a vector b of the same dimension, and you want to find a vector x such that "A*x=b". Then you write "x=A\b", which is equivalent to "x=A^(-1)*b" but does not actually compute the full inverse matrix (which would be wasteful).


Sorry, yes, I know about the syntax. I'm just struggling with what exactly you would be plugging in. Like, with respect to any of the example problems given, what would A, x, and b be?


You create matrices that represent the operator. For instance, suppose you had a 1D function and were interested in the evenly spaced points (x1, x2, x3, x4, x5), with the function values at these points being

  y1
  y2
  y3
  y4
  y5
If the differential equation had a first derivative in it, you'd construct something like this, multiplied by some constant (e.g. 1/(2*dx) for an unscaled derivative):

   ?  ?  ?  ?  ?
  -1  0  1  0  0
   0 -1  0  1  0
   0  0 -1  0  1
   ?  ?  ?  ?  ?
So the derivative at each element is defined by the difference of the next and previous elements. Multiplying the column of function values by this gives you the derivatives. This doesn't work for the first and last element, and in fact you'll usually modify these rows depending on what boundary condition is needed, so I've just left them filled in with "?".
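In Julia terms, a minimal sketch of that operator applied to known function values might look like this (the grid spacing, sample values, and zeroed-out boundary rows are all made up for illustration):

```julia
using LinearAlgebra

dx = 0.5                          # made-up grid spacing
D  = zeros(5, 5)
for i in 2:4                      # interior rows: central difference
    D[i, i-1] = -1 / (2dx)
    D[i, i+1] =  1 / (2dx)
end                               # boundary rows left as zeros (the "?" placeholders)
y  = [0.0, 1.0, 2.0, 3.0, 4.0]    # samples of y = 2x on x = 0, 0.5, 1, 1.5, 2
(D * y)[2:4]                      # interior derivatives, each ≈ 2.0
```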

For a solver, you don't know what the y values actually are, so you construct a column that corresponds to the right side of the differential equation. For instance, if the equation were something like the trivial dy/dx = c and you were using the operator above, the column would be

  ?
  c
  c
  c
  ?
with the first and last values to be filled in based on the boundary conditions. You then left-matrix-divide that column by the operator matrix (i.e., in effect, multiply it by the inverse of the operator matrix). That gives the solution to the equation.

This is just a simple example and in practice the matrices will be larger and more complex.
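To make that concrete, here's a small Julia sketch of the whole procedure for dy/dx = c on five points. The constant c, the spacing, the Dirichlet condition y1 = 0 in the first row, and the one-sided difference in the last row are all assumptions chosen just so the example is complete:

```julia
using LinearAlgebra

c, dx = 2.0, 0.5
A = zeros(5, 5)
A[1, 1] = 1.0                    # first boundary row: enforces y1 = 0
for i in 2:4                     # interior rows: central difference
    A[i, i-1] = -1 / (2dx)
    A[i, i+1] =  1 / (2dx)
end
A[5, 4], A[5, 5] = -1/dx, 1/dx   # last boundary row: one-sided difference
b = [0.0, c, c, c, c]            # right-hand side, boundary value filled in
y = A \ b                        # left matrix divide: y ≈ [0, 1, 2, 3, 4]
```

The result matches the exact solution y = c*x on the grid x = 0, 0.5, …, 2.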


x is f in the article

A is the five-point Laplacian (a symmetric matrix with five non-zero diagonals), as implemented in the "step" function in the article; let's call it L

b is the datum h

If you set up the sparse matrix L and the vector h (with the same code that performs the iterations in the article), then you can solve the Poisson equation "L*f=h" by doing "f=L\h".


Don’t forget APL. I can’t say for sure but I imagine this \ operator came from there.


It's not some obscure symbol; it's just division. It just so happens that we typically "divide" matrices from the left when solving equations like this (and matrix "division" isn't commutative) so instead of `a/b` it's `b\a`.


Yes, I know. It's a solve operator; it's equivalent to division only in an infinite-precision world.

The \ operator differs from the / operator in that it doesn't compute an inverse; it solves the system of equations. Solver algorithms are more numerically stable (in that you're much less likely to have large errors due to wacky input data).
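One rough way to see the stability point in Julia (the Hilbert matrix is used here purely because it's a convenient ill-conditioned example):

```julia
using LinearAlgebra

n = 12
A = [1 / (i + j - 1) for i in 1:n, j in 1:n]  # Hilbert matrix, badly conditioned
b = A * ones(n)                               # right-hand side with known solution
r_solve = norm(A * (A \ b) - b)               # residual of the solve
r_inv   = norm(A * (inv(A) * b) - b)          # residual via explicit inverse
```

On a typical run the solve's residual is orders of magnitude smaller than the explicit-inverse one.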


Yes, I know. :)

I'd just say that solving the system of equations is the best way to divide by a matrix — that's why I put air quotes around "divide" above. In Julia, right-dividing matrices (with `A/B`) actually does the smart adjoint-commuting thing to left-divide it and do the solve (with `(B'\A')'`).
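That identity is easy to check on a small example (the matrices here are arbitrary invertible ones):

```julia
using LinearAlgebra

A = [1.0 2.0; 3.0 4.0]
B = [2.0 0.0; 1.0 3.0]
A / B ≈ (B' \ A')'   # right division matches the adjoint-commuted left solve
```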


    using LinearAlgebra, SparseArrays
    N  = 10
    D1 = sparse(Tridiagonal(ones(N-1), -2ones(N), ones(N-1)))  # 1D second-difference operator
    Id = sparse(I, N, N)
    A  = kron(D1, Id) + kron(Id, D1)                           # 2D five-point Laplacian
You just use the Kronecker product to build the matrix in the different directions. This, along with the relationship to convolutional neural networks, is described in the MIT 18.337 course notes: https://mitmath.github.io/18337/lecture14/pdes_and_convoluti...
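With that matrix in hand, the solve itself is one line. Here h is just a placeholder right-hand side (in the article it would come from the data), and the construction above is repeated so the sketch is self-contained:

```julia
using LinearAlgebra, SparseArrays

N  = 10
D1 = sparse(Tridiagonal(ones(N-1), -2ones(N), ones(N-1)))
Id = sparse(I, N, N)
A  = kron(D1, Id) + kron(Id, D1)  # 2D five-point Laplacian
h  = ones(N^2)                    # placeholder datum
f  = A \ h                        # direct sparse solve, no iteration needed
```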



