Bases: object
Learn low-dimensional embedding optimizing the WARP loss.
Parameters
    d : int
    gamma : float
    C : float
    max_iters : int
    validation_iters : int
    batch_size : int
    positive_thresh : float
    max_trials : int
Attributes
    U_ : numpy.ndarray
        Row factors.
    V_ : numpy.ndarray
        Column factors.
Methods
    compute_updates(train, decomposition, updates)
    estimate_precision(decomposition, train, ...)
        Compute prec@k for a sample of training rows.
    estimate_warp_loss(train, u, N)
    fit(train[, validation])
        Learn factors from training set.
    precompute_warp_loss(num_cols)
        Precompute WARP loss for each possible rank.
    sample(train, decomposition)
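A minimal usage sketch (the numeric settings are illustrative only, and passing the parameters listed above as keyword arguments is an assumption):

    import numpy as np
    from scipy.sparse import csr_matrix
    from mrec.mf.model.warp import WARP

    # Toy ratings matrix: rows are users, columns are items.
    train = csr_matrix(np.array([[5., 0., 3., 0.],
                                 [0., 4., 0., 2.],
                                 [1., 0., 0., 5.]]))

    model = WARP(d=2, gamma=0.01, C=100.0, max_iters=1000,
                 validation_iters=100, batch_size=2,
                 positive_thresh=0.0, max_trials=10)
    model.fit(train)

    # The dot product of the learned factors approximately reconstructs the training matrix.
    scores = model.U_.dot(model.V_.T)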
estimate_precision(decomposition, train, ...)

Compute prec@k for a sample of training rows.

Parameters
    decomposition : WARPDecomposition
    train : scipy.sparse.csr_matrix
    k : int
    validation : dict or int

Returns
    prec : float

Notes
    At the moment this will underestimate the precision of real recommendations because we do not exclude training columns with zero ratings from the top-k predictions evaluated.
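For intuition, prec@k for a single row can be computed along these lines (an illustrative sketch; the function and variable names are hypothetical and this is not the library's code):

    import numpy as np

    def prec_at_k(scores, positive_cols, k=30):
        # scores: predicted score for every column of one training row
        # positive_cols: column indices with positive ratings in that row
        top_k = np.argsort(-scores)[:k]            # k highest-scoring columns
        hits = len(set(top_k) & set(positive_cols))
        return hits / float(k)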
fit(train[, validation])

Learn factors from training set. The dot product of the factors reconstructs the training matrix approximately, minimizing the WARP ranking loss relative to the original data.

Parameters
    train : scipy.sparse.csr_matrix
    validation : dict or int

Returns
    self : object
precompute_warp_loss(num_cols)

Precompute WARP loss for each possible rank:

    L(k) = sum_{i=0..k} 1/(i+1)
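The whole table can be built with a single cumulative sum (a sketch; the array layout is an assumption):

    import numpy as np

    num_cols = 1000                                        # illustrative size
    # warp_loss[k] = sum_{i=0..k} 1/(i+1), one entry per possible rank
    warp_loss = np.cumsum(1.0 / np.arange(1, num_cols + 1))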
Bases: object
Collection of arrays to hold a batch of WARP SGD updates.
Methods
    clear()
    set_update(ix, update)
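A rough picture of what such a buffer holds, with hypothetical field names (not mrec's implementation, just a sketch matching the clear()/set_update(ix, update) interface):

    import numpy as np

    class BatchUpdateBuffer(object):
        """Preallocated arrays for one mini-batch of WARP SGD updates (illustrative only)."""

        def __init__(self, batch_size, d):
            self.u = np.zeros(batch_size, dtype=np.int64)        # sampled row per update
            self.v_pos = np.zeros(batch_size, dtype=np.int64)    # positive column per update
            self.v_neg = np.zeros(batch_size, dtype=np.int64)    # violating negative column per update
            self.dU = np.zeros((batch_size, d))                  # step for the row factor
            self.dV_pos = np.zeros((batch_size, d))              # step for the positive column factor
            self.dV_neg = np.zeros((batch_size, d))              # step for the negative column factor

        def clear(self):
            for arr in (self.u, self.v_pos, self.v_neg, self.dU, self.dV_pos, self.dV_neg):
                arr[:] = 0

        def set_update(self, ix, update):
            # update is assumed to be the tuple returned by compute_gradient_step()
            u, i, j, dU, dV_pos, dV_neg = update
            self.u[ix], self.v_pos[ix], self.v_neg[ix] = u, i, j
            self.dU[ix], self.dV_pos[ix], self.dV_neg[ix] = dU, dV_pos, dV_neg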
Bases: object
Matrix embedding optimizing the WARP loss.
Parameters
    num_rows : int
    num_cols : int
    d : int
Methods
    apply_updates(updates, gamma, C)
    compute_gradient_step(u, i, j, L)
        Compute a gradient step from results of sampling.
    reconstruct(rows)
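reconstruct(rows) is presumably the usual low-rank product; a sketch under that assumption:

    # Assuming U has shape (num_rows, d) and V has shape (num_cols, d),
    # the predicted scores for the selected rows are their dot products with every column.
    def reconstruct_rows(U, V, rows):
        return U[rows].dot(V.T)    # shape (len(rows), num_cols)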
compute_gradient_step(u, i, j, L)

Compute a gradient step from results of sampling.

Parameters
    u : int
    i : int
    j : int
    L : int

Returns
    u : int
    i : int
    j : int
    dU : numpy.ndarray
    dV_pos : numpy.ndarray
    dV_neg : numpy.ndarray
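These return values match the textbook WARP update, sketched below under the assumption that u is the sampled row, i the positive column, j the sampled violating negative column, and L the precomputed loss weight for the estimated rank; mrec's exact code may differ:

    def gradient_step(U, V, u, i, j, L):
        # Push the positive column up, the violating negative down,
        # and move the row factor toward their difference, all scaled by L.
        dU = L * (V[i] - V[j])
        dV_pos = L * U[u]
        dV_neg = -L * U[u]
        return u, i, j, dU, dV_pos, dV_neg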
Bases: mrec.mf.model.warp.WARP
Learn low-dimensional embedding optimizing the WARP loss.
Parameters
    d : int
    gamma : float
    C : float
    max_iters : int
    validation_iters : int
    batch_size : int
    positive_thresh : float
    max_trials : int
Attributes
    U_ : numpy.ndarray
        Row factors.
    V_ : numpy.ndarray
        Column factors.
    W_ : numpy.ndarray
        Item feature factors.
Methods
    compute_updates(train, decomposition, updates)
    estimate_precision(decomposition, train, ...)
        Compute prec@k for a sample of training rows.
    estimate_warp_loss(train, u, N)
    fit(train, X[, validation])
        Learn embedding from training set.
    precompute_warp_loss(num_cols)
        Precompute WARP loss for each possible rank.
    sample(train, decomposition)
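A minimal usage sketch for this feature-aware variant (the class name and import path below are assumptions; only the parameter names come from this page):

    import numpy as np
    from scipy.sparse import csr_matrix
    from mrec.mf.model.warp2 import WARP2   # assumed name and location of this subclass

    train = csr_matrix(np.array([[5., 0., 3., 0.],
                                 [0., 4., 0., 2.],
                                 [1., 0., 0., 5.]]))
    X = np.random.rand(train.shape[1], 8)   # num_cols x num_features item features

    model = WARP2(d=2, gamma=0.01, C=100.0, max_iters=1000,
                  validation_iters=100, batch_size=2,
                  positive_thresh=0.0, max_trials=10)
    model.fit(train, X)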
fit(train, X[, validation])

Learn embedding from training set. A suitable dot product of the factors reconstructs the training matrix approximately, minimizing the WARP ranking loss relative to the original data.

Parameters
    train : scipy.sparse.csr_matrix
    X : array_like, shape = [num_cols, num_features]
    validation : dict or int

Returns
    self : object
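One plausible reading of "a suitable dot product", given the U_, V_ and W_ attributes above and assuming W_ projects the num_features item features into the same d-dimensional space (an assumption, not stated on this page):

    # score(u, c) is approximately U_[u] . (V_[c] + X[c] . W_):
    # each column is represented by its free factor plus a projection of its features.
    def reconstruct_row(U_, V_, W_, X, u):
        return U_[u].dot((V_ + X.dot(W_)).T)    # scores for every column, for row u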
Bases: mrec.mf.model.warp.WARPBatchUpdate
Collection of arrays to hold a batch of SGD updates.
Methods
    clear()
    set_update(ix, update)
Bases: mrec.mf.model.warp.WARPDecomposition
Joint matrix and feature embedding optimizing the WARP loss.
Parameters
    num_rows : int
    num_cols : int
    X : array_like, shape = [num_cols, num_features]
    d : int
Methods
    apply_matrix_update(W, dW, gamma, C)
    apply_updates(updates, gamma, C)
    compute_gradient_step(u, i, j, L)
        Compute a gradient step from results of sampling.
    reconstruct(rows)
compute_gradient_step(u, i, j, L)

Compute a gradient step from results of sampling.

Parameters
    u : int
    i : int
    j : int
    L : int

Returns
    u : int
    i : int
    j : int
    dU : numpy.ndarray
    dV_pos : numpy.ndarray
    dV_neg : numpy.ndarray
    dW : numpy.ndarray
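Extending the plain WARP step sketched earlier with the same assumed scoring model, score(u, c) = U[u] . (V[c] + X[c] . W), the extra dW term would look like this (a sketch, not mrec's verified code):

    import numpy as np

    def gradient_step_with_features(U, V, W, X, u, i, j, L):
        dU = L * (V[i] - V[j] + (X[i] - X[j]).dot(W))    # row factor also sees the feature difference
        dV_pos = L * U[u]
        dV_neg = -L * U[u]
        dW = L * np.outer(X[i] - X[j], U[u])             # shape (num_features, d)
        return u, i, j, dU, dV_pos, dV_neg, dW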