Regularization
I wanted to run an experiment with a pairwise loss where my inputs are user and item/document embeddings; I should be able to recover BPR-style matrix factorization. Here is my scoring function:
```python
import tensorflow as tf
from tensorflow.feature_column import (
    categorical_column_with_identity, embedding_column)

def make_score_fn():
    def _score_fn(context_features, group_features, mode, params, config):
        with tf.name_scope("input_layer"):
            item_id = categorical_column_with_identity(
                key="iid", num_buckets=params.item_buckets, default_value=0)
            item_emb = embedding_column(item_id, params.K)
            user_id = categorical_column_with_identity(
                key="uid", num_buckets=params.user_buckets, default_value=0)
            user_emb = embedding_column(user_id, params.K)
            dense_user = tf.feature_column.input_layer(group_features, [user_emb])
            dense_item = tf.feature_column.input_layer(group_features, [item_emb])
        # Score each example by the dot product of its user and item embeddings.
        dot = tf.reduce_sum(tf.math.multiply(dense_user, dense_item), 1, keepdims=True)
        logits = dot
        # Returns: Tensor of shape [batch_size, group_size] containing per-example scores.
        return logits

    return _score_fn
```
As you can see, it's the dot product of the embeddings. This seems to work, in the sense that the model can overfit the training set. Now I'd like to regularize the embeddings, but I can't figure out how to do that from inside this scoring function. If I had access to the loss, I would do something like this:
```python
# reg_scale is the regularization strength (`lambda` is a reserved word in Python).
l2 = tf.contrib.layers.l2_regularizer(reg_scale)
l2_reg = tf.contrib.layers.apply_regularization(l2, weights_list=[dense_user, dense_item])
loss += l2_reg
```
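A minimal sketch of one workaround, assuming the TF 1.x graph-collection mechanism (`reg_scale` is a hypothetical regularization strength, not part of the original code): register the penalty from inside `_score_fn`, then read it back wherever the loss tensor is assembled.

```python
# Inside _score_fn, after dense_user and dense_item are built:
penalty = tf.nn.l2_loss(dense_user) + tf.nn.l2_loss(dense_item)
tf.add_to_collection(tf.GraphKeys.REGULARIZATION_LOSSES, reg_scale * penalty)

# Later, wherever the loss is available, sum everything that was
# registered in the collection and add it to the ranking loss:
loss += tf.losses.get_regularization_loss()
```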

Hi @eggie5, regularization is an important scenario. Instead of using one of the standard loss functions provided by the library, you can define a custom loss function and pass it to the ranking head. Here is an example of how you can create such a custom loss function.
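A minimal sketch of that suggestion, assuming the TF 1.x Estimator-based TF-Ranking API; the pairwise loss key and the Adagrad training op are illustrative choices, not the library's prescribed setup.

```python
import tensorflow as tf
import tensorflow_ranking as tfr

def make_regularized_loss_fn(loss_key):
    """Wraps a built-in ranking loss and adds any collected regularization."""
    base_loss_fn = tfr.losses.make_loss_fn(loss_key)

    def _loss_fn(labels, logits, features):
        loss = base_loss_fn(labels, logits, features)
        # Picks up everything registered under GraphKeys.REGULARIZATION_LOSSES,
        # e.g. the embedding penalty added inside _score_fn.
        return loss + tf.losses.get_regularization_loss()

    return _loss_fn

def _train_op_fn(loss):
    return tf.train.AdagradOptimizer(learning_rate=0.05).minimize(
        loss, global_step=tf.train.get_global_step())

ranking_head = tfr.head.create_ranking_head(
    loss_fn=make_regularized_loss_fn(
        tfr.losses.RankingLossKey.PAIRWISE_LOGISTIC_LOSS),
    train_op_fn=_train_op_fn)
```

The head can then be wired into an estimator with `tfr.model.make_groupwise_ranking_fn(group_score_fn=make_score_fn(), group_size=1, ranking_head=ranking_head)`.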
Would like to reopen this. I'm trying to accomplish something similar: take regularization losses on layers defined in `_score_fn` and add them to the loss in `_loss_fn`.
Was wondering if anybody has succeeded in doing this, and what the recommended way of doing it is.