Factorization Models

FactorizationModel

class recoder.nn.FactorizationModel[source]

Base class for factorization models. All subclasses should implement the following methods; a minimal subclass skeleton is sketched after the forward() documentation below.

init_model(num_items=None, num_users=None)[source]

Initializes the model with the number of users and items to be represented.

Parameters:
  • num_users (int) – number of users to be represented in the model
  • num_items (int) – number of items to be represented in the model
model_params()[source]

Returns the model parameters. Mainly used when storing the model hyper-parameters (e.g. hidden layers, activation, etc.) in a snapshot file by recoder.model.Recoder.

Returns: Model parameters.
Return type: dict
load_model_params(model_params)[source]

Loads model_params into the model. Mainly used when loading the model hyper-parameters (e.g. hidden layers, activation, etc.) from a snapshot file of the model stored by recoder.model.Recoder. A round-trip sketch follows.

Parameters: model_params (dict) – model parameters
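
A minimal sketch of the intended round-trip, using the DynamicAutoencoder subclass documented below (restoring the hyper-parameters before initializing the weights is an assumption; recoder.model.Recoder normally drives this internally):

>>> autoencoder = DynamicAutoencoder([500, 100])
>>> params = autoencoder.model_params()  # dict of hyper-parameters
>>> fresh = DynamicAutoencoder()
>>> fresh.load_model_params(params)      # restore hyper-parameters on a fresh instance
>>> fresh.init_model(num_items=500)      # then initialize the weights
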
forward(input, input_users=None, input_items=None, target_users=None, target_items=None)[source]

Applies a forward pass of the input through the latent factor model.

Parameters:
  • input (torch.FloatTensor) – the dense input matrix of user-item interactions.
  • input_users (torch.LongTensor) – the users represented in the input batch, where each user corresponds to a row in input based on its index.
  • input_items (torch.LongTensor) – the items represented in the input batch, where each item corresponds to a column in input based on its index.
  • target_users (torch.LongTensor) – the target users to predict. Typically, this is not used, but kept for consistency.
  • target_items (torch.LongTensor) – the target items to predict.
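
A minimal subclass skeleton (illustrative only: the identity "model" and its method bodies are placeholders, not part of the library, and it assumes FactorizationModel behaves as an abstract torch.nn.Module):

from recoder.nn import FactorizationModel

class IdentityModel(FactorizationModel):
    """Toy subclass showing which methods to implement."""

    def init_model(self, num_items=None, num_users=None):
        # allocate whatever parameters the model needs here
        self.num_items = num_items
        self.num_users = num_users

    def model_params(self):
        # hyper-parameters to store in a snapshot file
        return {}

    def load_model_params(self, model_params):
        # restore hyper-parameters from a snapshot file
        pass

    def forward(self, input, input_users=None, input_items=None,
                target_users=None, target_items=None):
        # a real model would score the target items here
        return input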

DynamicAutoencoder

class recoder.nn.DynamicAutoencoder(hidden_layers=None, activation_type='tanh', is_constrained=False, dropout_prob=0.0, noise_prob=0.0, sparse=False)[source]

An autoencoder module that processes variable-size vectors. This is particularly efficient when we only want to reconstruct sub-samples of a large sparse vector rather than the whole vector, as in negative sampling.

Let F be a DynamicAutoencoder function that reconstructs vectors of size d, let X be a matrix of size B×d where B is the batch size, let Z be a sub-matrix of X, and let I be an index vector of any length such that 0 <= I[i] < d and Z = X[:, I]. The reconstruction of Z is F(Z, I). See Examples.
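
The sub-matrix relationship above, written out with plain tensor indexing (an illustration of the notation, not library code):

>>> import torch
>>> X = torch.rand(32, 500)               # B = 32, d = 500
>>> I = torch.LongTensor([10, 126, 452])  # any subset of column indices
>>> Z = X[:, I]                           # the columns of X selected by I
>>> Z.size()
torch.Size([32, 3])

Here F(Z, I) corresponds to autoencoder(Z, input_items=I, target_items=I) in the examples below.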

Parameters:
  • hidden_layers (list) – sizes of the autoencoder hidden layers. Only the encoder layer sizes are given; the decoder mirrors them.
  • activation_type (str, optional) – activation function to use for the hidden layers. All activations in torch.nn.functional are supported.
  • is_constrained (bool, optional) – if True, constrains the model by reusing the encoder weights in the decoder (weight tying).
  • dropout_prob (float, optional) – dropout probability at the bottleneck layer.
  • noise_prob (float, optional) – dropout (noise) probability at the input layer.
  • sparse (bool, optional) – if True, gradients w.r.t. the embedding layers' weight matrices will be sparse tensors. Currently, sparse gradients are only fully supported by torch.optim.SparseAdam; see the sketch after the examples below.

Examples:

>>> autoencoder = DynamicAutoencoder([500, 100])
>>> autoencoder.init_model(num_items=500)  # initialize with the number of items before the first forward pass
>>> batch_size = 32
>>> input = torch.rand(batch_size, 5)
>>> input_items = torch.LongTensor([10, 126, 452, 29, 34])
>>> output = autoencoder(input, input_items=input_items, target_items=input_items)
>>> output
   0.0850  0.9490  ...   0.2430  0.5323
   0.3519  0.4816  ...   0.9483  0.2497
        ...         ⋱         ...
   0.8744  0.8194  ...   0.5755  0.2090
   0.5006  0.9532  ...   0.8333  0.4330
  [torch.FloatTensor of size 32x5]
>>>
>>> # predicting a different set of target items
>>> target_items = torch.LongTensor([31, 14, 95, 49, 10, 36, 239])
>>> output = autoencoder(input, input_items=input_items, target_items=target_items)
>>> output
   0.5446  0.5468  ...   0.9854  0.6465
   0.0564  0.1238  ...   0.5645  0.6576
        ...         ⋱         ...
   0.0498  0.6978  ...   0.8462  0.2135
   0.6540  0.5686  ...   0.6540  0.4330
  [torch.FloatTensor of size 32x7]
>>>
>>> # reconstructing the whole vector
>>> input = torch.rand(batch_size, 500)
>>> output = autoencoder(input)
>>> output
   0.0865  0.9054  ...   0.8987  0.0456
   0.9852  0.6540  ...   0.1205  0.8488
        ...         ⋱         ...
   0.4650  0.3540  ...   0.5646  0.5605
   0.6940  0.2140  ...   0.9820  0.5405
  [torch.FloatTensor of size 32x500]
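
When sparse=True, the embedding gradients are sparse tensors, so the model should be paired with torch.optim.SparseAdam (a usage sketch; the learning rate is arbitrary):

>>> autoencoder = DynamicAutoencoder([500, 100], sparse=True)
>>> autoencoder.init_model(num_items=500)
>>> optimizer = torch.optim.SparseAdam(autoencoder.parameters(), lr=1e-3)
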
init_model(num_items=None, num_users=None)[source]

Initializes the model with the number of users and items to be represented.

Parameters:
  • num_users (int) – number of users to be represented in the model
  • num_items (int) – number of items to be represented in the model
model_params()[source]

Returns the model parameters. Mainly used when storing the model hyper-parameters (e.g. hidden layers, activation, etc.) in a snapshot file by recoder.model.Recoder.

Returns: Model parameters.
Return type: dict
load_model_params(model_params)[source]

Loads model_params into the model. Mainly used when loading the model hyper-parameters (e.g. hidden layers, activation, etc.) from a snapshot file of the model stored by recoder.model.Recoder.

Parameters: model_params (dict) – model parameters
forward(input, input_users=None, input_items=None, target_users=None, target_items=None)[source]

Applies a forward pass of the input through the latent factor model.

Parameters:
  • input (torch.FloatTensor) – the dense input matrix of user-item interactions.
  • input_users (torch.LongTensor) – the users represented in the input batch, where each user corresponds to a row in input based on its index.
  • input_items (torch.LongTensor) – the items represented in the input batch, where each item corresponds to a column in input based on its index.
  • target_users (torch.LongTensor) – the target users to predict. Typically, this is not used, but kept for consistency.
  • target_items (torch.LongTensor) – the target items to predict.

MatrixFactorization

class recoder.nn.MatrixFactorization(embedding_size, activation_type='none', dropout_prob=0, sparse=False)[source]

Defines a Matrix Factorization model for collaborative filtering. This is particularly efficient when we only want to reconstruct sub-samples of a large sparse vector rather than the whole vector, as in negative sampling. A usage sketch follows the parameter list below.

Parameters:
  • embedding_size (int) – embedding size (rank) of the latent factors of users and items
  • activation_type (str, optional) – activation function to be applied to the user embedding. All activations in torch.nn.functional are supported.
  • dropout_prob (float, optional) – dropout probability to be applied to the user embedding.
  • sparse (bool, optional) – if True, gradients w.r.t. the embedding layers' weight matrices will be sparse tensors. Currently, sparse gradients are only fully supported by torch.optim.SparseAdam.
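
A usage sketch mirroring the DynamicAutoencoder example above (MatrixFactorization has no example of its own; the expected output size of batch_size × len(target_items), and which of the forward arguments the model actually consumes, are assumptions based on the documented forward signature):

>>> mf = MatrixFactorization(embedding_size=128)
>>> mf.init_model(num_items=500, num_users=1000)
>>> batch_size = 32
>>> input = torch.rand(batch_size, 5)
>>> input_users = torch.arange(batch_size)  # one user per row of input
>>> input_items = torch.LongTensor([10, 126, 452, 29, 34])
>>> target_items = torch.LongTensor([31, 14, 95])
>>> output = mf(input, input_users=input_users, input_items=input_items, target_items=target_items)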
init_model(num_items=None, num_users=None)[source]

Initializes the model with the number of users and items to be represented.

Parameters:
  • num_users (int) – number of users to be represented in the model
  • num_items (int) – number of items to be represented in the model
model_params()[source]

Returns the model parameters. Mainly used when storing the model hyper-parameters (e.g. hidden layers, activation, etc.) in a snapshot file by recoder.model.Recoder.

Returns: Model parameters.
Return type: dict
load_model_params(model_params)[source]

Loads model_params into the model. Mainly used when loading the model hyper-parameters (e.g. hidden layers, activation, etc.) from a snapshot file of the model stored by recoder.model.Recoder.

Parameters: model_params (dict) – model parameters
forward(input, input_users=None, input_items=None, target_users=None, target_items=None)[source]

Applies a forward pass of the input through the latent factor model.

Parameters:
  • input (torch.FloatTensor) – the dense input matrix of user-item interactions.
  • input_users (torch.LongTensor) – the users represented in the input batch, where each user corresponds to a row in input based on its index.
  • input_items (torch.LongTensor) – the items represented in the input batch, where each item corresponds to a column in input based on its index.
  • target_users (torch.LongTensor) – the target users to predict. Typically, this is not used, but kept for consistency.
  • target_items (torch.LongTensor) – the target items to predict.