Danish trf wordpiece tokenisation strips accents, and the lemmatisation list is missing

See original GitHub issue

How to reproduce the behaviour
The Danish transformer strips accents, which leads to identical wordpieces for meaningfully different words.
>>> import spacy
>>> nlp = spacy.load('da_core_news_trf')
>>> doc = nlp("sål og sal er to forskellige ord")  # "sole" and "hall" are two different words
>>> doc._.trf_data.wordpieces
WordpieceBatch(strings=[['[CLS]', 'sal', 'og', 'sal', 'er', 'to', 'forskellige', 'ord', '[SEP]']], input_ids=array([[ 2, 2114, 28, 2114, 33, 385, 599, 1014, 3]]), attention_mask=array([[1, 1, 1, 1, 1, 1, 1, 1, 1]]), lengths=[9], token_type_ids=array([[0, 0, 0, 0, 0, 0, 0, 0, 0]]))
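The collapse of `sål` into `sal` comes from BERT-style accent stripping, which decomposes characters and drops combining marks. A minimal re-implementation of that preprocessing step (not the actual tokenizer code, just a sketch of the same Unicode logic) shows why the two words become identical:

```python
import unicodedata

def strip_accents(text: str) -> str:
    # BERT-style accent stripping: decompose to NFD, then drop
    # combining marks (Unicode category "Mn").
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if unicodedata.category(ch) != "Mn")

print(strip_accents("sål"))  # -> "sal", indistinguishable from "sal"
```

Note that `å` decomposes into `a` plus a combining ring above, so the ring is removed, while a letter like `ø` has no canonical decomposition and survives this step.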
The solution is to set strip_accents to false and retrain the model, like so:
[components.transformer.model.tokenizer_config]
use_fast = true
strip_accents = false
I also looked at the lemmatisation list used in the Danish model: it refers to a directory that contains no lemmatisation list for Danish. This seems like an error. Regardless, the Center for Danish Language Technology (sprogteknologi.dk) has published such a list.
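For context, a table-based lemmatizer of the kind spaCy uses for lookup lemmatisation is just a full-form-to-lemma dictionary with a fallback to the surface form. A minimal sketch, with a hypothetical hand-picked excerpt standing in for the actual published Danish table:

```python
# Hypothetical excerpt of a Danish full-form -> lemma lookup table;
# a real table would be built from the published sprogteknologi.dk list.
LEMMA_LOOKUP = {
    "huset": "hus",    # "the house" -> "house"
    "husene": "hus",   # "the houses" -> "house"
    "gik": "gå",       # "went" -> "go"
    "ordene": "ord",   # "the words" -> "word"
}

def lemmatize(token: str) -> str:
    # Fall back to the surface form when the token is not in the table,
    # mirroring the behaviour of a lookup-based lemmatizer.
    return LEMMA_LOOKUP.get(token.lower(), token)

print(lemmatize("Huset"))  # -> "hus"
```

Coverage of such a lemmatizer is only as good as the table, which is why a missing list effectively disables lemmatisation.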
Info about spaCy
- spaCy version: 3.1.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.8
- Pipelines: en_core_web_lg (3.0.0), da_core_news_trf (3.1.0), da_core_news_lg (3.0.0), da_core_news_sm (3.0.0), da_core_news_md (3.0.0), en_core_web_md (3.0.0), en_core_web_sm (3.0.0)
Issue Analytics

- State: closed
- Created 2 years ago
- Comments: 7 (7 by maintainers)
I don’t think either of the `strip_accents` options corresponds to the preprocessing that was done when training the models (strip everything except Danish accents), so I think it might make sense to leave this as it is for now. It’s not ideal, but I hope the fine-tuned model will mostly compensate for the mismatch / ambiguity. Let’s hope there are better options in the future…

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.