Project 2: Hopfield Networks
Hopfield Networks
Dynamic Feedback
Applications of Hopfield Networks: Networks with ODE dynamics
- Pattern Storage: create attractors so that the network converges to a particular stored pattern.
Patterns can be reconstructed from noisy or corrupted inputs.
The same network topology has been used for reconstruction in compressed-sensing systems.
For stored pattern vectors y_k with outputs between 0 and 1,
W = Σ_k (2 y_k − 1)(2 y_k − 1)^T,
where the k patterns are the attractors (the points the network converges to).
(A small NumPy sketch of this weight rule appears after this list.)
- Optimization: set up a cost (energy) surface and let the network dynamics descend it, as in gradient descent.
These approaches are used for a range of optimization problems,
including NP-hard combinatorial problems.
One reference on finding Hamiltonians (cost functions) for
Hopfield and Hopfield-type problems: .pdf,
and one previous Max-Cut Hopfield implementation: .pdf
Max-Cut is one of the most popular and most approachable of these problems.
- Recurrent Networks
- Generalized Winner-Take-All Networks
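As a concrete illustration of the outer-product weight rule above, here is a minimal NumPy sketch; the function name hopfield_weights and the toy patterns are illustrative choices, not part of the assignment, and zeroing the diagonal (no self-connections) is a common convention rather than a requirement.

Python Code (sketch):

import numpy as np

def hopfield_weights(patterns):
    """Outer-product (Hebbian) weight rule for patterns with entries in {0, 1}.
    patterns: array of shape (K, N), one stored pattern per row.
    Returns an N x N symmetric weight matrix W."""
    patterns = np.asarray(patterns, dtype=float)
    bipolar = 2.0 * patterns - 1.0        # map {0, 1} -> {-1, +1}
    W = bipolar.T @ bipolar               # sum_k (2 y_k - 1)(2 y_k - 1)^T
    np.fill_diagonal(W, 0.0)              # self-connections are usually removed
    return W

# Toy example: three 8-node patterns (the project itself asks for > 20 nodes)
P = np.array([[1, 0, 1, 0, 1, 0, 1, 0],
              [1, 1, 1, 1, 0, 0, 0, 0],
              [0, 0, 1, 1, 1, 1, 0, 0]])
W = hopfield_weights(P)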
ODE Solutions
Every in-class coding example will formulate computation and learning algorithms as ODEs and solve them numerically.
Types of ODE solvers, order of convergence
- Euler Method: Order 1
- Runge-Kutta (RK) Methods: Order 2 and higher
- Predictive RK Methods: e.g. RK45, which combines a 4th-order RK computation with a 5th-order estimate used to choose the step size h.
- Stiff Differential Equations
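Since the first part of the project asks you to develop your own ODE computing platform, a minimal fixed-step sketch of the two simplest method families above (forward Euler, order 1, and the classical 4th-order Runge-Kutta step) may be a useful starting point; the function names and interface are my own choices, not a prescribed design.

Python Code (sketch):

import numpy as np

def euler_step(f, t, y, h):
    # One forward-Euler step of y' = f(t, y): first-order accurate.
    return y + h * f(t, y)

def rk4_step(f, t, y, h):
    # One classical Runge-Kutta step: fourth-order accurate.
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(step, f, t_span, y0, h):
    # Fixed-step integration of y' = f(t, y) over t_span = (t0, t1).
    t0, t1 = t_span
    t, y = t0, np.asarray(y0, dtype=float)
    ts, ys = [t], [y]
    while t < t1:
        h_eff = min(h, t1 - t)           # do not step past t1
        y = step(f, t, y, h_eff)
        t = t + h_eff
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)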
ODE routines and computation
MATLAB Code:
[t,y] = ode45(odefun, tspan, y0);   % Runge-Kutta 4th-order/5th-order predictive (adaptive-step) model
[t,y] = ode23(odefun, tspan, y0);   % Runge-Kutta 2nd-order/3rd-order predictive (adaptive-step) model
[t,y] = ode15s(odefun, tspan, y0);  % solver for stiff differential equations
[t,y] = ode23s(odefun, tspan, y0);  % solver for stiff differential equations, low-order method
Python Code:
y = odeint(func, y0, t)                          # scipy.integrate.odeint
sol = solve_ivp(fun, t_span, y0, method='RK45')  # scipy.integrate.solve_ivp
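For reference, a minimal usage sketch of the two SciPy calls above on a simple test problem (y' = -y, y(0) = 1; the test problem and variable names are only illustrative):

Python Code (sketch):

import numpy as np
from scipy.integrate import odeint, solve_ivp

def f(t, y):
    return -y                                  # test problem: y' = -y, y(0) = 1

t = np.linspace(0.0, 5.0, 200)

# odeint expects f(y, t) by default; tfirst=True lets it accept f(t, y) instead
y_odeint = odeint(f, [1.0], t, tfirst=True)

# solve_ivp expects f(t, y), a (t0, t1) span, and optional output times t_eval
sol = solve_ivp(f, (0.0, 5.0), [1.0], method='RK45', t_eval=t)
y_ivp = sol.y[0]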
References:
- Texas A&M ODE MATLAB reference: pdf
- MATLAB ODE overview: pdf
- MATLAB solution of ODEs examples: pdf
- Solving ODEs in Python: pdf
- Halvorsen on Python ODE solutions: pdf
Order of ODE algorithms
- An n-th-order method's per-step computation is accurate to within O(h^(n+1)).
- An n-th-order method's error bound is proportional to the (n+1)-th derivative, f^(n+1)( ).
Functions whose higher derivatives are large (and keep growing, e.g. exp( ))
produce wide error bounds (potentially high error)
that can lead to a lack of solution stability.
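One way to see the order behavior above is to integrate a problem with a known solution at several step sizes and watch how the error at a fixed time shrinks; a rough sketch with forward Euler (the test problem y' = -y and the step sizes are arbitrary illustrative choices):

Python Code (sketch):

import numpy as np

def euler_step(f, t, y, h):
    # Forward-Euler step (order 1), redefined here so the snippet stands alone.
    return y + h * f(t, y)

f = lambda t, y: -y                            # exact solution: y(t) = exp(-t)
for h in [0.2, 0.1, 0.05, 0.025]:
    n = int(round(1.0 / h))                    # number of steps to reach t = 1
    y = 1.0
    for i in range(n):
        y = euler_step(f, i * h, y, h)
    print(f"h = {h:6.3f}   |error at t = 1| = {abs(y - np.exp(-1.0)):.3e}")
# For a first-order method, halving h should roughly halve the error at t = 1;
# an n-th-order method would shrink it by roughly 2^n.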
Numerics for Stiff Differential Equations
- Finite-precision arithmetic accumulates roughly 1 bit of error for every 4 additions.
- ODE solvers build up their solution through continual additions
(smaller h means more additions over the same interval, so proportionally higher accumulated error).
- Smaller h also means smaller per-step updates, so more issues with lsb (least-significant-bit) errors.
- Longer running sequences result in higher numerical error.
Figure 1: Numerical ODE solution convergence and numerical error
as a function of ODE numerical approximation and
as a function of numerical derivative estimation.
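A quick way to observe the accumulated rounding-error effect described above is to add a small single-precision value many times and compare against the exact result; the particular increment and count below are arbitrary illustrative choices.

Python Code (sketch):

import numpy as np

h = np.float32(1e-4)                  # a small increment, as in a fine-grained ODE solve
acc = np.float32(0.0)
for _ in range(100_000):              # 100,000 additions of h
    acc = acc + h
exact = 1e-4 * 100_000                # exact sum: 10.0
print(f"accumulated = {float(acc):.6f}, exact = {exact:.1f}, "
      f"relative error = {abs(float(acc) - exact) / exact:.2e}")
# Shrinking h (more additions over the same span) increases the accumulated error.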
Project Requirements
This project begins the process of
simulating and computing neural network structures.
There are two parts to this project:
- Developing your own ODE computing platform
- Simulating a Hopfield Network Solution
The first part of the project directly impacts the effort in the second part.
The final code you used for this project
should be in a separate file
from your writeup.
Numerical Simulation of Differential Equations
Numerical solution of ODEs is a critical aspect of exploring
Neural Network algorithms.
Simulate the following ODE system:
- x(t) = cos(2 π f t + θ), where f = 1 kHz, with phase θ
- The input dimension of x(t) is 10 or larger, with a different phase θ for each component.
- τ = 0.2 ms
- τ w'(t) = y(t) x(t) − y(t)^2 w(t)   (w and x are vectors)
- y(t) is a single dimension: y(t) = w(t)^T x(t)
- Simulate this ODE for 5 seconds of physical time.
- Show w(t) and y(t).
It is assumed you will be using an ODE solver package, the same one you will
use throughout the semester.
Just stating which package you are using is sufficient.
You need to show me the code that you wrote for this simulation
(i.e., not the ODE libraries that you used).
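One possible way to set this simulation up with solve_ivp is sketched below. The w-dynamics above have the form of Oja's learning rule, and the sketch simply integrates them; the random phases, initial weights, tolerances, and variable names are my own choices, not requirements.

Python Code (sketch):

import numpy as np
from scipy.integrate import solve_ivp

n = 10                                      # input dimension (>= 10)
f_hz = 1000.0                               # f = 1 kHz
tau = 0.2e-3                                # tau = 0.2 ms
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, n)    # a different phase per input component

def x_of_t(t):
    return np.cos(2.0 * np.pi * f_hz * t + theta)

def w_dot(t, w):
    # tau * w'(t) = y x - y^2 w, with y = w^T x
    x = x_of_t(t)
    y = w @ x
    return (y * x - y**2 * w) / tau

w0 = 0.1 * rng.standard_normal(n)           # small random initial weights
t_end = 5.0                                 # 5 seconds of physical time
t_eval = np.linspace(0.0, t_end, 50_001)    # output grid for plotting w(t), y(t)

# Note: 5 s of a 1 kHz signal with tau = 0.2 ms is a long integration,
# so expect a large number of solver steps.
sol = solve_ivp(w_dot, (0.0, t_end), w0, method='RK45',
                t_eval=t_eval, rtol=1e-6, atol=1e-9)

w_t = sol.y                                 # shape (n, len(t_eval)): w(t)
y_t = np.array([w_t[:, i] @ x_of_t(ti) for i, ti in enumerate(sol.t)])  # y(t)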
Hopfield Network Simulations
We will develop Hopfield network simulations;
the simulations are ODE computations.
You need to show me the code that you wrote for this simulation
(i.e., not the ODE libraries that you used).
Develop a Hopfield Network with fixed inputs for two different cases.
- The first case shows pattern storage in a Hopfield network.
The dimension of the network should be greater than 20 nodes.
Store three patterns of your choice in the network,
showing (graphically is fine) the initial desired patterns
and how you calculated your weights.
Show the convergence from initial conditions to each of the
three stored states.
- The second case shows a Hopfield network solving a small-dimension
(6-10 node) instance of the Max-Cut problem,
showing the solution from three significantly different initial conditions.
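One possible continuous-time formulation for the pattern-storage case is the standard additive Hopfield model, τ u' = −u + W tanh(gain·u), with weights from the outer-product rule earlier; the sketch below is one way to do it, not the required formulation, and the network size, time constant, gain, and noise level are my own assumptions.

Python Code (sketch):

import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
N = 25                                        # network dimension (> 20 nodes)

# Three stored patterns in {0, 1}^N (arbitrary choices for illustration)
patterns = rng.integers(0, 2, size=(3, N))
bipolar = 2.0 * patterns - 1.0                # map to {-1, +1}
W = bipolar.T @ bipolar                       # outer-product weight rule
np.fill_diagonal(W, 0.0)

tau = 1.0                                     # time constant (arbitrary units)
gain = 2.0                                    # slope of the tanh nonlinearity

def hopfield_ode(t, u):
    # Continuous-time additive Hopfield model: tau u' = -u + W tanh(gain u)
    return (-u + W @ np.tanh(gain * u)) / tau

# Start near the first stored pattern with a few bits flipped (noise)
u0 = bipolar[0].astype(float)
flip = rng.choice(N, size=5, replace=False)
u0[flip] *= -1.0

sol = solve_ivp(hopfield_ode, (0.0, 20.0), u0, method='RK45', rtol=1e-6)
recovered = (np.tanh(gain * sol.y[:, -1]) > 0).astype(int)   # back to {0, 1}
print("recovered pattern 0:", np.array_equal(recovered, patterns[0]))

For the Max-Cut case the same dynamics can be reused; a common mapping sets the weights from the negated adjacency matrix of the graph so that the Hopfield energy tracks the cut cost (see the Hamiltonian reference above for details on constructing such cost functions).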