Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might be looking while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

Sentence tokenization in spaCy?

See original GitHub issue

While doing sentence tokenization in spaCy, I ran into the following problem:

from __future__ import unicode_literals, print_function
from spacy.en import English
nlp = English()
doc = nlp('Hello, world. Here are two sentences.')
sentence = doc.sents.next()
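# Note: doc.sents is a generator of sentence Span objects;
# .next() is Python 2 generator syntax for taking the first one.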

It was unclear to me how to get the text of the sentence object. I tried using dir() to find a method that would allow this, but was unsuccessful. Any code I have found from others doing sentence tokenization doesn’t seem to work properly.

Issue Analytics

  • State: closed
  • Created: 8 years ago
  • Comments: 12 (4 by maintainers)

Top GitHub Comments

36 reactions
honnibal commented, Sep 9, 2015

Argh.

The last snippet I sent you was wrong — sorry, it’s late and I was hasty.

from __future__ import unicode_literals, print_function
from spacy.en import English

raw_text = 'Hello, world. Here are two sentences.'
nlp = English()
doc = nlp(raw_text)
sentences = [sent.string.strip() for sent in doc.sents]

The Doc object has an attribute, sents, which gives you Span objects for the sentences.
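
For readers landing here today: spacy.en and Span.string were removed in later releases. A minimal modern sketch, assuming spaCy 3.x with the en_core_web_sm pipeline installed (python -m spacy download en_core_web_sm):

import spacy

# Load a pretrained pipeline; its parser provides sentence boundaries.
nlp = spacy.load("en_core_web_sm")
doc = nlp("Hello, world. Here are two sentences.")

# doc.sents yields Span objects; Span.text gives each sentence's text.
sentences = [sent.text for sent in doc.sents]
print(sentences)              # ['Hello, world.', 'Here are two sentences.']

# doc.sents is a fresh generator on each access, so next() returns
# the first sentence (what the original question was after).
print(next(doc.sents).text)   # 'Hello, world.'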

9 reactions
honnibal commented, Sep 9, 2015

from __future__ import unicode_literals, print_function
from spacy.en import English

raw_text = 'Hello, world. Here are two sentences.'
nlp = English()
doc = nlp(raw_text)
sentences = [sent.string.strip() for sent in doc]
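# Bug (corrected in the comment above): iterating a Doc yields
# Token objects, not sentences; iterate doc.sents instead.
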
Read more comments on GitHub >

Top Results From Across the Web

NLP with spaCy Tutorial: Part 2 (Tokenization and Sentence ...
Simply speaking, tokenization is the method of splitting the sentence into its tokens. Let's see how it works in spaCy. Understanding the Code....
Read more >
How to Perform Sentence Segmentation or Sentence ...
Sentence Segmentation or Sentence Tokenization is the process of identifying the different sentences among a group of words. Spacy library ...
Read more >
Tokenizer · spaCy API Documentation
The tokenizer is typically created automatically when a Language subclass is initialized and it reads its settings like punctuation and special case rules...
Read more >
Tokenization Using Spacy library - GeeksforGeeks
Tokenization is the process of splitting a text or a sentence into segments, which are called tokens. It is the first step of...
Read more >
Complete Guide to Spacy Tokenizer with Examples - MLK
In Spacy, the process of tokenizing a text into segments of words and punctuation is done in various steps. It processes the text...
Read more >

Troubleshoot Live Code

Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start Free
