Neural Network & Learning Visualization
Learning Parameters

📊 Current Step

Ready to start backpropagation

💥 Loss Function

Mean Squared Error (MSE): L = ½(y_pred - y_true)²

Current Loss: 0.00
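
A minimal sketch of this loss and its derivative (the names y_pred and y_true are illustrative, not taken from the visualization):

    # Squared-error loss L = 1/2 * (y_pred - y_true)^2 and its derivative.
    def loss(y_pred, y_true):
        return 0.5 * (y_pred - y_true) ** 2

    def dloss_dy_pred(y_pred, y_true):
        # The 1/2 factor cancels the exponent, leaving (y_pred - y_true).
        return y_pred - y_true

    print(loss(0.8, 1.0))           # about 0.02
    print(dloss_dy_pred(0.8, 1.0))  # about -0.2
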
🌊 Gradient Flow

Watch how gradients flow backward through the network
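
A minimal sketch of that backward flow for a hypothetical 2-3-1 network with sigmoid activations (the layer sizes, input values, and names W1/W2 are assumptions for illustration):

    import numpy as np

    np.random.seed(0)

    # Hypothetical 2-3-1 network: 2 inputs, 3 hidden units, 1 output, all sigmoid.
    W1 = 0.5 * np.random.randn(3, 2)   # input -> hidden weights
    W2 = 0.5 * np.random.randn(1, 3)   # hidden -> output weights

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    x = np.array([[0.5], [-1.0]])      # input column vector
    y_true = np.array([[1.0]])         # target output

    # Forward pass
    z1 = W1 @ x                        # weighted sums into the hidden layer
    h = sigmoid(z1)                    # hidden activations
    z2 = W2 @ h                        # weighted sum into the output
    y_pred = sigmoid(z2)               # network output
    L = 0.5 * (y_pred - y_true) ** 2   # loss, as defined above

    # Backward pass: gradients flow from the loss back toward the inputs
    dL_dy = y_pred - y_true                 # dL/dy_pred
    dL_dz2 = dL_dy * y_pred * (1 - y_pred)  # back through the output sigmoid
    dL_dW2 = dL_dz2 @ h.T                   # gradient for hidden -> output weights
    dL_dh = W2.T @ dL_dz2                   # flow back into the hidden layer
    dL_dz1 = dL_dh * h * (1 - h)            # back through the hidden sigmoid
    dL_dW1 = dL_dz1 @ x.T                   # gradient for input -> hidden weights

    print("loss:", L.item())
    print("dL/dW2:", dL_dW2)                # shape (1, 3)
    print("dL/dW1:", dL_dW1)                # shape (3, 2)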

โš–๏ธ Weight Matrix
๐Ÿ”ข Input โ†’ Hidden Weights
๐Ÿ“Š Hidden โ†’ Output Weights
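
A minimal sketch of how these two weight matrices might be shaped and updated by gradient descent, assuming the same hypothetical 2-3-1 layout and an illustrative learning rate:

    import numpy as np

    learning_rate = 0.1                        # illustrative value

    # Hypothetical 2-3-1 network: rows index the receiving layer, columns the sending layer.
    W_input_hidden = np.zeros((3, 2))          # input -> hidden weights
    W_hidden_output = np.zeros((1, 3))         # hidden -> output weights

    # Gradients with matching shapes, e.g. produced by a backward pass like the one above.
    grad_input_hidden = np.full((3, 2), 0.01)
    grad_hidden_output = np.full((1, 3), 0.02)

    # Gradient-descent update: nudge every weight against its gradient.
    W_input_hidden -= learning_rate * grad_input_hidden
    W_hidden_output -= learning_rate * grad_hidden_output

    print(W_input_hidden)    # every entry is now -0.001
    print(W_hidden_output)   # every entry is now -0.002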

โ›“๏ธ Chain Rule in Action

The chain rule allows us to compute gradients for nested functions:

∂L/∂w = ∂L/∂y × ∂y/∂z × ∂z/∂w

Where:

  • L = Loss function
  • y = Network output
  • z = Weighted sum before activation
  • w = Weight parameter
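
A minimal worked example of that product for a single weight feeding a sigmoid output (the concrete numbers are illustrative only):

    import math

    # One weight: z = w * x, y = sigmoid(z), L = 0.5 * (y - y_true)^2
    w, x, y_true = 0.6, 1.5, 1.0

    z = w * x                        # weighted sum before activation
    y = 1.0 / (1.0 + math.exp(-z))   # network output

    dL_dy = y - y_true               # dL/dy  for the squared-error loss
    dy_dz = y * (1.0 - y)            # dy/dz  for the sigmoid activation
    dz_dw = x                        # dz/dw  since z = w * x

    dL_dw = dL_dy * dy_dz * dz_dw    # chain rule: dL/dw = dL/dy * dy/dz * dz/dw
    print(dL_dw)                     # about -0.089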