Wrappers and helpers for interacting with fastai.

most_common_errors[source]

most_common_errors(interp)

More concise version of `most_confused`. Find the single most common
error for each true class.

Parameters
----------
interp: fastai ClassificationInterpretation

Returns
-------
dict[str, str]: Maps each true class to the class it is most often
    incorrectly predicted as.
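
A minimal usage sketch (assumes a trained fastai classifier `Learner` named
`learn`; the example labels are made up):

```python
from fastai.interpret import ClassificationInterpretation

interp = ClassificationInterpretation.from_learner(learn)
errors = most_common_errors(interp)  # e.g. {'cat': 'dog', 'tiger': 'cat'}
for true_cls, pred_cls in errors.items():
    print(f'{true_cls} is most often misclassified as {pred_cls}')
```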

n_groups[source]

n_groups(learn)

Quickly get the number of layer groups in a fastai learner.

Parameters
----------
learn: fastai Learner
    The fastai models don't appear to specify layer groups the way I've been
    doing in plain PyTorch. My guess is that this is because the API was
    reworked so everything is Sequential and can be indexed into, which does
    make it easier to experiment with different groupings.

Returns
-------
int: Number of layer groups (e.g. 4 for the default awd_lstm).
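
For instance, the group count could drive a gradual unfreezing loop. A sketch,
assuming a trained `learn` (`freeze_to` and `fit_one_cycle` are standard
fastai `Learner` methods):

```python
n = n_groups(learn)          # e.g. 4 for the default awd_lstm
for i in range(1, n + 1):
    learn.freeze_to(-i)      # keep all but the last i layer groups frozen
    learn.fit_one_cycle(1)
```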

complete_sentence[source]

complete_sentence(learn, start_phrase, n_words=30, n_samples=3, temp=0.75)

Generate text from a given input. This is a decent way to get a glimpse
of how a language model is doing.

Parameters
----------
learn: fastai Learner
start_phrase: str
    The prompt (start of a sentence) that you want the model to complete.
n_words: int
    Number of words to generate after the given prompt.
n_samples: int
    Number of sample sentences to generate.
temp: float
    Sampling temperature. Lower values produce safer, more predictable
    text; higher values increase diversity.
Returns
-------
list[str]: Each item is one completed sentence.
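
Example usage (a sketch; `learn` is a trained language-model `Learner` and
the prompt is made up):

```python
sentences = complete_sentence(learn, 'This movie was', n_words=30,
                              n_samples=3, temp=0.75)
for s in sentences:
    print(s)
```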

class LRPicker[source]

LRPicker(sugg_type:('lr_min', 'lr_steep', 'avg')='lr_min', resolve:('avg', 'auto', 'manual')='avg', tol_ratio=0.1)

Take a user-suggested LR into account as an "anchor" to ensure `lr_find`
doesn't choose something too far out of the ordinary.
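
The anchoring idea might look something like the sketch below. The function
`anchored_lr` is hypothetical, and this reading of `tol_ratio` (the
suggestion must stay within a factor of `1 / tol_ratio` of the anchor) is an
assumption, not the class's actual logic:

```python
def anchored_lr(suggested, anchor, tol_ratio=0.1):
    # Hypothetical sketch, NOT LRPicker's real code: clamp the lr_find
    # suggestion to the window [anchor * tol_ratio, anchor / tol_ratio].
    lo, hi = anchor * tol_ratio, anchor / tol_ratio
    return min(max(suggested, lo), hi)

anchored_lr(suggested=3e-1, anchor=1e-3)  # 1e-2: pulled back toward anchor
```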

class ULMFineTuner[source]

ULMFineTuner(learn, name_fmt='cls_stage_{}')

Fine-tune a language model using the ULMFiT procedure. I noticed the
built-in `fine_tune` method does not unfreeze one layer at a time as the
paper describes; I'm not sure if they found that to be a better practice or
if it's just simpler for an automated method.

Originally, part of the reason for building this was also to decrease the
batch size at each stage, since unfreezing consumes more memory (gradients
must be stored for more layers). However, I decided I'd rather not account
for a changing batch size when selecting each stage's LR (we could run
`lr_find` before each stage, but I opted for the simpler approach).
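
For reference, the manual schedule this class presumably automates resembles
the standard fastai text-classifier recipe (a sketch using stock fastai
calls, not `ULMFineTuner` internals; the learning rates are illustrative):

```python
learn.fit_one_cycle(1, 2e-2)                            # train the head only
learn.freeze_to(-2)                                     # unfreeze one more group
learn.fit_one_cycle(1, slice(1e-2 / (2.6 ** 4), 1e-2))
learn.freeze_to(-3)
learn.fit_one_cycle(1, slice(5e-3 / (2.6 ** 4), 5e-3))
learn.unfreeze()                                        # finally, the whole model
learn.fit_one_cycle(2, slice(1e-3 / (2.6 ** 4), 1e-3))
```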

class FastLabelEncoder[source]

FastLabelEncoder(learn)

Use a fastai learner to mimic a sklearn label encoder. This can be
useful because our standard evaluation code is often built while we are
trying out simple baseline models (e.g. logistic regression using sklearn),
so it expects sklearn-style label encodings.
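
A sketch of the idea (a guess at the mechanism, not the actual
implementation). It assumes a text classifier, where `learn.dls.vocab` holds
`[token_vocab, label_vocab]` and the label vocab's ordering defines the
integer encoding, much like sklearn's `LabelEncoder.classes_`:

```python
import numpy as np

class SketchLabelEncoder:
    """Hypothetical sketch mimicking sklearn's LabelEncoder interface."""

    def __init__(self, learn):
        # Assumption: for a text classifier, vocab is [tokens, labels].
        self.classes_ = list(learn.dls.vocab[-1])
        self._idx = {c: i for i, c in enumerate(self.classes_)}

    def transform(self, labels):
        return np.array([self._idx[c] for c in labels])

    def inverse_transform(self, codes):
        return np.array([self.classes_[i] for i in codes])
```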