Surely in PyTorch/TF, the ND-arrays of NNs must also be associated with an object reference?
> existing frameworks (PyTorch/TensorFlow) and accelerated hardware (GPUs/TPUs) which operate on neurons arranged as arrays of the same shape.
```typescript
const dot = (inputs: number[], weights: number[]) => {
  let total = 0;
  const length = Math.min(inputs.length, weights.length);
  for (let i = 0; i < length; i++) {
    total += inputs[i] * weights[i];
  }
  return total;
};

activate(input?: number): number {
  if (!this.incoming.length) {
    this.output = input;
  } else {
    const inputs = this.incoming.map(connection => connection.from.output);
    const weights = this.incoming.map(connection => connection.weight);
    this.output = squash(dot(inputs, weights) + this.bias);
  }
  return this.output;
}
```
This is in principle the same concept, right?
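For comparison, here is a minimal PyTorch sketch (assuming a sigmoid as the squash function) of what the array-based version of that per-neuron `activate` looks like: the weight vectors of every neuron in a layer are packed into one tensor, and that tensor is itself held as an attribute of a module object, so there is still an object reference involved, just one per layer rather than one per neuron.

```python
import torch
import torch.nn as nn

# One layer of 4 neurons, each with 3 inputs. All four weight vectors are
# stored together in a single (4, 3) tensor, and that tensor is owned by
# the Linear module object via its .weight attribute (likewise .bias).
layer = nn.Linear(in_features=3, out_features=4)

x = torch.randn(3)            # inputs for the whole layer at once
y = torch.sigmoid(layer(x))   # squash(W @ x + b) for all 4 neurons in one call

print(layer.weight.shape)     # torch.Size([4, 3])
print(layer.bias.shape)       # torch.Size([4])
print(y.shape)                # torch.Size([4])
```

So the arithmetic is the same dot-product-plus-bias-then-squash; the difference is that the loop over incoming connections (and over neurons) is replaced by one matrix operation, which is what lets GPUs/TPUs accelerate it.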