2.4 Exploring Text Corpora
In Section 2 we saw how we could interrogate a tagged corpus to extract phrases matching a particular sequence of part-of-speech tags. We can do the same work more easily with a chunker, as follows:
>>> cp = nltk.RegexpParser('CHUNK: {<V.*> <TO> <V.*>}')
>>> brown = nltk.corpus.brown
>>> for sent in brown.tagged_sents():
...     tree = cp.parse(sent)
...     for subtree in tree.subtrees():
...         if subtree.label() == 'CHUNK': print(subtree)
...
(CHUNK combined/VBN to/TO achieve/VB)
(CHUNK continue/VB to/TO place/VB)
(CHUNK serve/VB to/TO protect/VB)
(CHUNK wanted/VBD to/TO wait/VB)
(CHUNK allowed/VBN to/TO place/VB)
(CHUNK expected/VBN to/TO become/VB)
...
(CHUNK seems/VBZ to/TO overtake/VB)
(CHUNK want/VB to/TO buy/VB)
Note
Your Turn: Encapsulate the above example inside a function find_chunks() that takes a chunk string such as "CHUNK: {<V.*> <TO> <V.*>}" as its argument. Use it to search the corpus for several other patterns, such as four or more nouns in a row, e.g. "NOUNS: {<N.*>{4,}}".
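One possible solution is sketched below. The function name find_chunks() comes from the exercise itself; reading the chunk label off the part of the grammar string before the colon is an assumption made here for illustration, not something prescribed by the text.

import nltk

def find_chunks(chunk_string):
    """Print every chunk in the Brown corpus matching the given grammar string."""
    # Assumption: the chunk label (e.g. 'CHUNK' or 'NOUNS') is the part before the colon.
    label = chunk_string.split(':')[0].strip()
    cp = nltk.RegexpParser(chunk_string)
    for sent in nltk.corpus.brown.tagged_sents():
        tree = cp.parse(sent)
        for subtree in tree.subtrees():
            if subtree.label() == label:
                print(subtree)

find_chunks("CHUNK: {<V.*> <TO> <V.*>}")
find_chunks("NOUNS: {<N.*>{4,}}")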
