
'bias' feature in example

See original GitHub issue

I’m very new to CRF so I apologize if my issue is just ignorance…

I was going through the example and noticed that word2features() added a 'bias' to the beginning of each feature set. Does this have a purpose? It seems that, since every set of features will contain that ‘bias’ string, the end result should be the same without it (or I’m just totally not getting it). I tried looking through the docs here and the crfsuite docs and couldn’t find anything that would indicate the purpose.
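
For reference, this is roughly the kind of extractor being discussed (an abbreviated sketch in the style of the tutorial, not the exact original; feature names differ between the python-crfsuite and sklearn-crfsuite examples):

```python
# Abbreviated sketch of a word2features() in the tutorial's style (assumed).
# 'sent' is a list of (word, postag) pairs.
def word2features(sent, i):
    word = sent[i][0]
    features = [
        'bias',                               # constant feature added to every token
        'word.lower=' + word.lower(),
        'word.istitle=%s' % word.istitle(),
    ]
    if i > 0:
        features.append('-1:word.lower=' + sent[i - 1][0].lower())
    else:
        features.append('BOS')                # marks the beginning of the sentence
    return features
```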

Issue Analytics

  • State: open
  • Created: 6 years ago
  • Reactions: 5
  • Comments: 6

Top GitHub Comments

2 reactions
Pantamis commented, Jan 19, 2018

'bias' is a feature that captures the proportion of a given label in the training set.

Intuitively, if you have no feature other than 'bias' in your model (so your features are just indicator functions of the current label), then the learned weight for a label will be higher the more often that label appears. At prediction time you will always return the label with the highest weight, which is the one that appeared most during training.
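
A minimal sketch of that intuition, assuming sklearn-crfsuite and a made-up toy corpus (the data and label names are just illustrative assumptions):

```python
import sklearn_crfsuite

# Every token carries only the constant 'bias' feature, so the model can only
# learn how often each label occurs (plus label-to-label transitions).
X_train = [
    [{'bias': 1.0}, {'bias': 1.0}, {'bias': 1.0}],
    [{'bias': 1.0}, {'bias': 1.0}],
]
y_train = [
    ['NOUN', 'NOUN', 'VERB'],   # toy data: NOUN is the majority label
    ['NOUN', 'NOUN'],
]

crf = sklearn_crfsuite.CRF(algorithm='lbfgs', max_iterations=50)
crf.fit(X_train, y_train)

# Prediction falls back to the most frequent training label.
print(crf.predict([[{'bias': 1.0}, {'bias': 1.0}]]))   # expected: [['NOUN', 'NOUN']]

# The learned state feature weights show ('bias', 'NOUN') > ('bias', 'VERB').
print(crf.state_features_)
```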

In a ‘real’ CRF, it is just a way to express that some labels are inherently rare and others are not, so the model can take that into account (for example, imagine a language in which verbs are mostly avoided but nouns are not; the model should express that by giving verb labels a lower weight on the 'bias' feature than noun labels).

I hope it is clear…

0 reactions
hanifabd commented, Nov 29, 2020

I applied it to sentence boundary disambiguation. When I remove the bias, my model is more aggressive in segmenting the sentence, but when I add bias = 1 it does better. I didn’t know the reason why, though.

Read more comments on GitHub >
