Bases: object
Randomly split a user's ratings into train and test groups. Only items with positive scores are included in the test group.
Parameters:

- test_size : float
- normalize : bool (default: False)
- discard_zeros : bool (default: False)
- sample_before_thresholding : bool (default: False)
Methods

- handle(vals)
- num_train(vals)
- pos_neg_vals(vals)
- split(vals)
- stratified_split(vals)
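As an illustration of the splitting behaviour described here, the following is a minimal standalone sketch. The function name `split_user_ratings`, the treatment of `test_size` as a fraction of the positive items, and the sampling strategy are assumptions, not the library's implementation.

```python
import random

def split_user_ratings(vals, test_size=0.5, discard_zeros=False, seed=None):
    # Hypothetical sketch: split one user's (item, score) pairs so that
    # only items with positive scores are eligible for the test group.
    rng = random.Random(seed)
    if discard_zeros:
        # Assumed meaning of discard_zeros: drop zero-score ratings entirely.
        vals = [(item, score) for item, score in vals if score != 0]
    positives = [(item, score) for item, score in vals if score > 0]
    # Assumption: test_size is a fraction of the positive items.
    num_test = int(round(test_size * len(positives)))
    test = rng.sample(positives, num_test)
    test_items = {item for item, _ in test}
    train = [(item, score) for item, score in vals if item not in test_items]
    return train, test

train, test = split_user_ratings([(1, 5.0), (2, 0.0), (3, 3.0)], test_size=0.5, seed=42)
```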
Bases: object
Parses TSV input: user, item, score.
Parameters:

- thresh : float (default: 0)
- binarize : bool (default: False)
Methods

- parse(line)
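A rough sketch of what `parse(line)` might do, assuming `thresh` drops scores at or below it and `binarize` replaces surviving scores with 1.0 (the function name and both semantics are assumptions):

```python
def parse_tsv(line, thresh=0.0, binarize=False):
    # Hypothetical parser for one "user<TAB>item<TAB>score" line.
    user, item, score = line.strip().split('\t')
    score = float(score)
    if score <= thresh:
        # Assumed meaning of thresh: discard scores at or below it.
        return None
    if binarize:
        # Assumed meaning of binarize: reduce to implicit 0/1 feedback.
        score = 1.0
    return int(user), int(item), score

print(parse_tsv("42\t7\t4.5"))  # (42, 7, 4.5)
```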
Metrics to evaluate recommendations:

- with hit rate, following e.g. the Karypis lab SLIM and FISM papers
- with prec@k and MRR
Compute hit rate, i.e. recall@k, assuming a single test item.
Parameters:

- predicted : array like
- true : array like
- k : int

Returns:

- hitrate : int
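For illustration, a minimal standalone version of this metric (treating it as a free function named `hit_rate` is an assumption):

```python
def hit_rate(predicted, true, k):
    # 1 if the single held-out test item appears in the top-k
    # predictions, else 0.
    assert len(true) == 1, "hit rate assumes a single test item"
    return int(true[0] in list(predicted)[:k])

print(hit_rate([3, 1, 7], [7], k=3))  # 1
```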
Compute precision@k.
Parameters:

- predicted : array like
- true : array like
- k : int
- ignore_missing : boolean (default: False)

Returns:

- prec@k : float
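A minimal sketch of precision@k. The interpretation of `ignore_missing`, dividing by the number of predictions actually supplied when fewer than k were made, is an assumption:

```python
def precision_at_k(predicted, true, k, ignore_missing=False):
    # Fraction of the top-k predictions that appear in the true items.
    topk = list(predicted)[:k]
    if not topk:
        return 0.0
    # Assumption: with ignore_missing, short prediction lists are not
    # penalised for the missing slots.
    denom = len(topk) if ignore_missing else k
    return len(set(topk) & set(true)) / float(denom)

print(precision_at_k([3, 1, 7], [7, 9], k=3))  # 0.333...
```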
Call this to print out the metrics returned by run_evaluation().
Compute Reciprocal Rank.
Parameters:

- predicted : array like
- true : array like

Returns:

- rr : float
Notes
This will be under-reported because our predictions are truncated.
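A minimal sketch of the computation, which also shows why truncation under-reports the metric: a relevant item that falls beyond the end of the truncated prediction list contributes 0 instead of its true reciprocal rank (the free-function name is an assumption):

```python
def reciprocal_rank(predicted, true):
    # 1/rank of the first relevant item in the predictions,
    # or 0.0 if none appears, e.g. because the list was truncated.
    relevant = set(true)
    for rank, item in enumerate(predicted, start=1):
        if item in relevant:
            return 1.0 / rank
    return 0.0

print(reciprocal_rank([3, 1, 7], [7]))  # 0.333...
```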
This is the main entry point to run an evaluation.
Supply functions to retrain the model, to get a new split of the data on each run, to get known items from the test set, and to compute the metrics you want:

- retrain(model, dataset) should retrain the model
- get_split() should return train_data, test_users, test_data
- evaluation_func(model, users, test) should return a dict of metrics

A number of suitable functions are already available in the module; a minimal driver illustrating the contract is sketched below.
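As a rough sketch of that contract (the driver below, including the `num_runs` parameter and metric averaging, is an assumption rather than the module's actual implementation):

```python
from collections import defaultdict

def run_evaluation_sketch(model, retrain, get_split, evaluation_func, num_runs=3):
    # Hypothetical driver: re-split, retrain and evaluate num_runs
    # times, then average each metric over the runs.
    totals = defaultdict(float)
    for _ in range(num_runs):
        train_data, test_users, test_data = get_split()
        retrain(model, train_data)
        for name, value in evaluation_func(model, test_users, test_data).items():
            totals[name] += value
    return {name: total / num_runs for name, total in totals.items()}
```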