The most basic ML task is classification
In NN lingo, this is called “association”
So let's predict “rain” (1) vs. “no rain” (0) for PDX tomorrow
We have historical “examples” of rain and shine
Since we know the classification (training set)…
Supervised classification (association)
Wunderground lists several possible “conditions” or classes
If we wanted to predict them all
We would just make a binary classifier for each one
All classification problems can be reduced to binary classification
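A minimal sketch of that one-binary-classifier-per-class idea (one-vs-rest); train_binary_classifier and the feature/label lists are hypothetical placeholders, not code from this talk:

def one_vs_rest(classes, features, labels, train_binary_classifier):
    """Train one binary classifier per class: 1 = "this class", 0 = "everything else"."""
    classifiers = {}
    for cls in classes:
        binary_labels = [1 if label == cls else 0 for label in labels]
        classifiers[cls] = train_binary_classifier(features, binary_labels)
    return classifiers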
Sounds mysterious, like a “flux capacitor” or something…
It’s just a multiply and threshold check:
output = 1 if np.dot(weights, inputs) > 0 else 0  # multiply, sum, threshold
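Here's a tiny runnable version of that multiply-and-threshold step, assuming NumPy arrays; the weights and inputs are made-up numbers:

import numpy as np

weights = np.array([0.2, -0.5, 0.1])   # one weight per input feature
inputs = np.array([0.9, 0.3, 0.4])     # e.g. scaled humidity, pressure, temperature
activation = np.dot(weights, inputs)   # multiply and sum
output = 1 if activation > 0 else 0    # threshold check: rain (1) or no rain (0)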
Again, sounds mysterious… like a transcendental function
It is a transcendental function, but the word (sigmoid, from the Greek letter sigma) just means
curved and smooth, like the letter “C”
What Roman (English) character?
You didn’t know this was a Latin class, did you…
Most English speakers think of an “S” when they hear “Sigma”.
So the meaning has evolved to mean S-shaped.
something smooth, shaped like an “S”
so it goes from 0 to 1 in an S shape
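One common S-shaped choice is the logistic sigmoid; a quick sketch, assuming NumPy:

import numpy as np

def sigmoid(x):
    """Logistic sigmoid: smooth and S-shaped, squashes any real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(-6), sigmoid(0), sigmoid(6))  # ~0.0025, 0.5, ~0.9975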
One matrix for each mess of connections between layers
Once you've trained the NN, you can display them as heat maps
Look for structure and opportunities to “prune”
(in the first matrix of weights)
(in the last matrix of weights)
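A sketch of one way to display a weight matrix as a heat map, assuming NumPy and matplotlib; first_layer_weights is a stand-in (here, random numbers) for your trained weights:

import numpy as np
import matplotlib.pyplot as plt

first_layer_weights = np.random.randn(10, 5)  # placeholder for a trained weight matrix

plt.imshow(first_layer_weights, cmap='hot', interpolation='nearest')
plt.colorbar(label='weight value')
plt.xlabel('hidden node')
plt.ylabel('input feature')
plt.title('First weight matrix as a heat map')
plt.show()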
The slope (derivative) can predict the change in output for a small change in each weight
Each training step wants to nudge the output closer to the target
target: known classification for training examples
output: predicted classification your network spits out
Don't get greedy and push all the way to the answer
Because your linear slope predictions are wrong
And there may be nonlinear interactions between the weights (multiple layers)
So set the learning rate (α) to something less than 1: the portion of the predicted nudge you want to “dial back” to
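A sketch of that dialed-back update, assuming the predicted nudge for each weight has already been computed; all the values below are placeholders:

import numpy as np

alpha = 0.1                              # learning rate: keep only 10% of the predicted nudge
weights = np.random.randn(5, 3)          # placeholder weight matrix
predicted_nudge = np.random.randn(5, 3)  # placeholder for the nudge the slope predicts for each weight

weights -= alpha * predicted_nudge       # dial the nudge back instead of pushing all the way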
Get historical weather for Portland then …
Disadvantage #1: Slow training
Disadvantage #2: They don’t scale (unparallelizable)
At the Kaggle workshop we discussed parallelizing the linear algebra
Scaling Workaround Limitations
But tiles must be shared/consolidated, and there's redundancy
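Roughly what the tiling workaround looks like for the big matrix multiplies (a NumPy sketch, not anything from the workshop); note how every tile's partial result still has to be consolidated at the end:

import numpy as np

def tiled_matmul(A, B, tile=64):
    """Multiply A @ B one tile of the shared dimension at a time (each tile could go to its own worker)."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for start in range(0, k, tile):
        stop = min(start + tile, k)
        partial = A[:, start:stop] @ B[start:stop, :]  # one tile's partial product
        C += partial                                   # partials must be consolidated (summed)
    return C

A = np.random.randn(200, 300)
B = np.random.randn(300, 100)
assert np.allclose(tiled_matmul(A, B), A @ B)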
Disadvantage #3: They overfit
What is the big O?
Rule of thumb
M * N**2
N: number of nodes
M: number of layers
assert M * N**2 < len(training_set) / 10.
I’m serious… put this into your code. I wasted a lot of time training models for Kaggle that overfitted.
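Same rule of thumb wrapped in a helper you can drop into your code; the layer/node/example counts below are made up:

def check_capacity(num_layers, num_nodes, num_training_examples):
    """Rule of thumb: keep the rough weight count (M * N**2) under 1/10 of the training set."""
    rough_weight_count = num_layers * num_nodes ** 2
    assert rough_weight_count < num_training_examples / 10., (
        f"{rough_weight_count} weights vs {num_training_examples} examples: likely to overfit")

check_capacity(num_layers=2, num_nodes=10, num_training_examples=5000)    # fine
# check_capacity(num_layers=3, num_nodes=100, num_training_examples=5000) # would trip the assert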
You do need to know math!
This is a virtuous cycle!
Structure you can play with (textbook)
jargon: receptive fields
jargon: weight sharing
All the rage: convolutional networks
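A tiny sketch of what “receptive fields” and “weight sharing” mean in practice: one small filter slides over the input, each output only sees a local window, and every window reuses the same weights (made-up numbers, 1-D for simplicity):

import numpy as np

filter_weights = np.array([0.25, 0.5, 0.25])  # one small set of shared weights (the filter)
signal = np.random.randn(20)                  # a 1-D input, e.g. one row of pixels

outputs = []
for i in range(len(signal) - len(filter_weights) + 1):
    window = signal[i:i + len(filter_weights)]      # the receptive field: a local window
    outputs.append(np.dot(filter_weights, window))  # the same weights reused at every position
outputs = np.array(outputs)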
Unconventional structure to play with
New ideas, no jargon yet, just crackpot names
Joke: “What’s the difference between a scientist and a crackpot?”
I’m a crackpot!