TL;DR:
pip3 install -U pywsd
Then:
>>> from pywsd.utils import lemmatize_sentence
>>> text = 'i like cars'
>>> lemmatize_sentence(text)
['i', 'like', 'car']
>>> lemmatize_sentence(text, keepWordPOS=True)
(['i', 'like', 'cars'], ['i', 'like', 'car'], ['n', 'v', 'n'])
>>> text = 'The cat likes cars'
>>> lemmatize_sentence(text, keepWordPOS=True)
(['The', 'cat', 'likes', 'cars'], ['the', 'cat', 'like', 'car'], [None, 'n', 'v', 'n'])
>>> text = 'The lazy brown fox jumps, and the cat likes cars.'
>>> lemmatize_sentence(text)
['the', 'lazy', 'brown', 'fox', 'jump', ',', 'and', 'the', 'cat', 'like', 'car', '.']
Otherwise, take a look at how the lemmatize_sentence function in pywsd works:
- Tokenizes the string
- Runs the POS tagger and maps the Penn Treebank tags to the WordNet POS tagset
- Attempts to stem the tokens
- Finally, calls the lemmatizer with the POS tags and/or stems
See https://github.com/alvations/pywsd/blob/master/pywsd/utils.py#L129
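Under the hood those steps are plain NLTK calls, so here is a minimal sketch of the same pipeline (the names penn2wordnet and simple_lemmatize_sentence are mine, not pywsd's API, and pywsd itself does more, e.g. stemming fallbacks):

```python
import nltk
from nltk import word_tokenize, pos_tag
from nltk.stem import WordNetLemmatizer

# Fetch the models/corpora this sketch needs (no-op if already present).
for pkg in ('punkt', 'averaged_perceptron_tagger', 'wordnet'):
    try:
        nltk.download(pkg, quiet=True)
    except Exception:
        pass  # offline: assume the data is already installed

def penn2wordnet(tag):
    """Map a Penn Treebank tag to a WordNet POS ('n'/'v'/'a'/'r'),
    or None when there is no WordNet equivalent (e.g. 'DT', ',')."""
    if tag.startswith('J'):
        return 'a'
    if tag.startswith('V'):
        return 'v'
    if tag.startswith('N'):
        return 'n'
    if tag.startswith('R'):
        return 'r'
    return None

def simple_lemmatize_sentence(text):
    lemmatizer = WordNetLemmatizer()
    lemmas = []
    for word, tag in pos_tag(word_tokenize(text)):  # tokenize + POS-tag
        pos = penn2wordnet(tag)
        if pos is None:
            lemmas.append(word.lower())  # no WordNet POS: just lowercase
        else:
            lemmas.append(lemmatizer.lemmatize(word.lower(), pos=pos))
    return lemmas
```

With the NLTK data installed, `simple_lemmatize_sentence('The cat likes cars')` should reproduce the `['the', 'cat', 'like', 'car']` lemmas shown above.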
He who fights with dragons for too long becomes a dragon himself; gaze too long into the abyss, and the abyss gazes back into you…