lexeme not loaded with string lookup
See original GitHub issue

The documentation suggests that a lexeme can be retrieved from the vocab by string lookup. However, running the following code:
from spacy.en import English
nlp = English()
lexeme_name = nlp.vocab[1000].orth_
# >> situation
lexeme = nlp.vocab['situation']
# >> *** TypeError: an integer is required
Should not the appropriate lexeme be returned with the string lookup argument?
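What is going wrong here is a Python 2 bytes-versus-unicode issue: under Python 2, the literal `'situation'` is a byte string, and the vocab lookup only accepts an integer ID or unicode text. The sketch below is an illustrative mock, not spaCy's actual implementation, showing how a lookup that dispatches on key type rejects a bytes key with exactly this kind of error:

```python
# Illustrative mock (not spaCy's real Vocab): __getitem__ accepts an
# integer orth ID or a unicode string, and rejects bytes the way the
# Cython layer rejected Python 2 byte strings like 'situation'.
class MockVocab:
    def __init__(self):
        self._by_orth = {1000: "situation"}
        self._by_string = {"situation": 1000}

    def __getitem__(self, key):
        if isinstance(key, bool):
            raise TypeError("an integer is required")
        if isinstance(key, int):
            return self._by_orth[key]
        if isinstance(key, str):  # unicode text (str in Python 3)
            return self._by_string[key]
        # A bytes key falls through, as 'situation' did under Python 2
        raise TypeError("an integer is required")


vocab = MockVocab()
print(vocab[1000])         # situation
print(vocab["situation"])  # 1000
try:
    vocab[b"situation"]    # a Python 2-style byte string
except TypeError as err:
    print(err)             # an integer is required
```

Under Python 2, writing `u'situation'` (or enabling `unicode_literals`, as suggested below) makes the key unicode and the lookup succeed.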
Issue Analytics
- Created 7 years ago
- Comments:6 (2 by maintainers)
Top GitHub Comments
Well… That might work for now. But if you do it that way, you'll be in for no end of frustration as you process more text.
You should work through the unicode/bytes difference in Python 2. It's pretty important if you're going to do NLP.
Best practices: make sure all your files have

from __future__ import unicode_literals

at the top, and always read in files using io.open(loc, encoding='utf8'). This will go most of the way to making things work by default.

I actually figured out a workaround. With the OP's code, if you change to

It probably will work.
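The `io.open` advice above can be sketched as follows. This is a self-contained demonstration (the file path is a temporary file created just for the example, not anything from the original issue): `io.open` with an explicit encoding returns unicode text on both Python 2 and Python 3, which is exactly the type the vocab lookup expects.

```python
# Demonstrates the recommended reading pattern: io.open with an explicit
# encoding yields unicode text (str in Python 3, unicode in Python 2).
import io
import os
import tempfile

# Create a throwaway UTF-8 file containing a non-ASCII character.
fd, loc = tempfile.mkstemp(suffix=".txt")
os.close(fd)
with io.open(loc, "w", encoding="utf8") as f:
    f.write(u"caf\u00e9 situation")

# Read it back; text comes out as unicode, ready for NLP processing.
with io.open(loc, encoding="utf8") as f:
    text = f.read()

print(type(text).__name__)  # str
print(text)                 # café situation
os.remove(loc)
```

Passing words read this way (unicode) into `nlp.vocab[...]` avoids the `TypeError` from the original report.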