
I wanted to run an experiment with a pairwise loss where my inputs are user and item/document embeddings. I should be able to recover BPR-style matrix factorization. Here is my scoring function:

import tensorflow as tf

def make_score_fn():
    def _score_fn(context_features, group_features, mode, params, config):

        with tf.name_scope("input_layer"):
            # Identity + embedding columns for the item and user ids.
            item_id = tf.feature_column.categorical_column_with_identity(
                key="iid", num_buckets=params.item_buckets, default_value=0)
            item_emb = tf.feature_column.embedding_column(item_id, params.K)

            user_id = tf.feature_column.categorical_column_with_identity(
                key="uid", num_buckets=params.user_buckets, default_value=0)
            user_emb = tf.feature_column.embedding_column(user_id, params.K)

            dense_user = tf.feature_column.input_layer(group_features, [user_emb])
            dense_item = tf.feature_column.input_layer(group_features, [item_emb])

        # Returns a Tensor of shape [batch_size, group_size] containing
        # per-example scores (here group_size = 1, so shape [batch_size, 1]).
        dot = tf.reduce_sum(tf.math.multiply(dense_user, dense_item), 1, keepdims=True)
        logits = dot
        return logits

    return _score_fn

As you can see, the score is just the dot product of the embeddings. This seems to work: the model can overfit the training set. Now I'd like to regularize the embeddings, but I can't figure out how to do that from inside this scoring function. If I had access to the loss, I would do something like this:

# Note: `lambda` is a reserved word in Python, so call the strength `scale`.
l2 = tf.contrib.layers.l2_regularizer(scale)
l2_reg = tf.contrib.layers.apply_regularization(l2, weights_list=[dense_user, dense_item])
loss += l2_reg
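
For reference, the same penalty can be written without tf.contrib (a sketch, where scale is the same L2 strength as above):

# tf.nn.l2_loss(t) computes sum(t ** 2) / 2, matching l2_regularizer's convention.
l2_reg = scale * (tf.nn.l2_loss(dense_user) + tf.nn.l2_loss(dense_item))
loss += l2_reg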

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 9 (7 by maintainers)

Top GitHub Comments

ramakumar1729 commented, Apr 17, 2019
Hi @eggie5, regularization is an important scenario. Instead of using one of the standard loss functions provided by the library, you can define a custom loss function and pass it to the ranking head.

Here is an example of how you can create a custom loss function:

def _make_loss_fn():
  """Returns a loss function that adds an L2 penalty to a ranking loss."""

  def _loss_fn(labels, logits, features):
    """Computes and returns the loss."""
    regularization_losses = 0.0
    if _use_regularizer:  # assumed config flag that toggles regularization
      # `lambda` is a reserved word in Python, so name the strength `scale`.
      # Note: dense_user and dense_item are built inside _score_fn, so they
      # must be made visible here, e.g. via a graph collection (see the
      # sketch further below).
      l2 = tf.contrib.layers.l2_regularizer(scale)
      regularization_losses = tf.contrib.layers.apply_regularization(
          l2, weights_list=[dense_user, dense_item])
    loss_fn = tfr.losses.make_loss_fn(tfr.losses.RankingLossKey.SOFTMAX_LOSS)
    return loss_fn(labels, logits, features) + regularization_losses

  return _loss_fn
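
A sketch of how this custom loss might be wired into a TF-Ranking estimator (TF 1.x-era APIs; the Adagrad choice, the learning rate, and group_size=1 are illustrative assumptions for this matrix-factorization setup, not something prescribed in the thread):

import tensorflow as tf
import tensorflow_ranking as tfr

def _train_op_fn(loss):
    # Any TF 1.x optimizer works here; Adagrad is just an illustrative choice.
    return tf.contrib.layers.optimize_loss(
        loss=loss,
        global_step=tf.train.get_global_step(),
        learning_rate=0.1,
        optimizer="Adagrad")

ranking_head = tfr.head.create_ranking_head(
    loss_fn=_make_loss_fn(),
    train_op_fn=_train_op_fn)

model_fn = tfr.model.make_groupwise_ranking_fn(
    group_score_fn=make_score_fn(),
    group_size=1,  # score each (user, item) pair individually
    ranking_head=ranking_head)

estimator = tf.estimator.Estimator(model_fn=model_fn)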
vishnuapp commented, Sep 10, 2019

I'd like to reopen this. I'm trying to accomplish something similar: take regularization losses on layers defined in _score_fn and add them to the loss in _loss_fn.

I was wondering whether anybody has succeeded in doing this, and what the recommended way of doing it is.
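
One pattern that works in TF 1.x graph mode (a sketch, not an official TF-Ranking API; the collection name "embeddings_to_regularize" and the scale hyperparameter are made up for illustration): because _score_fn and _loss_fn execute in the same graph, the score function can stash its tensors in a named collection and the loss function can read them back.

# Inside _score_fn, right after building the input layer:
tf.add_to_collection("embeddings_to_regularize", dense_user)
tf.add_to_collection("embeddings_to_regularize", dense_item)

# Inside _loss_fn, recover the tensors and penalize them:
reg = 0.0
for emb in tf.get_collection("embeddings_to_regularize"):
    reg += tf.nn.l2_loss(emb)  # sum(emb ** 2) / 2
loss_fn = tfr.losses.make_loss_fn(tfr.losses.RankingLossKey.SOFTMAX_LOSS)
return loss_fn(labels, logits, features) + scale * reg

Since the penalty is applied to the looked-up embedding rows, only embeddings that actually appear in the batch are regularized, which matches the usual BPR-style regularization.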
