
n-gram - Find the most frequently occurring words in a text in R

Can someone help me find the most frequently used two- and three-word phrases in a text using R?

My text is...

text <- c("There is a difference between the common use of the term phrase and its technical use in linguistics. In common usage, a phrase is usually a group of words with some special idiomatic meaning or other significance, such as "all rights reserved", "economical with the truth", "kick the bucket", and the like. It may be a euphemism, a saying or proverb, a fixed expression, a figure of speech, etc. In grammatical analysis, particularly in theories of syntax, a phrase is any group of words, or sometimes a single word, which plays a particular role within the grammatical structure of a sentence. It does not have to have any special meaning or significance, or even exist anywhere outside of the sentence being analyzed, but it must function there as a complete grammatical unit. For example, in the sentence Yesterday I saw an orange bird with a white neck, the words an orange bird with a white neck form what is called a noun phrase, or a determiner phrase in some theories, which functions as the object of the sentence. Theorists of syntax differ in exactly what they regard as a phrase; however, it is usually required to be a constituent of a sentence, in that it must include all the dependents of the units that it contains. This means that some expressions that may be called phrases in everyday language are not phrases in the technical sense. For example, in the sentence I can't put up with Alex, the words put up with (meaning 'tolerate') may be referred to in common language as a phrase (English expressions like this are frequently called phrasal verbs but technically they do not form a complete phrase, since they do not include Alex, which is the complement of the preposition with.")
See Question&Answers more detail:os


1 Answer


The tidytext package makes this sort of thing pretty simple:

library(tidytext)
library(dplyr)

data_frame(text = text) %>% 
    unnest_tokens(word, text) %>%    # split words
    anti_join(stop_words) %>%    # take out "a", "an", "the", etc.
    count(word, sort = TRUE)    # count occurrences

# Source: local data frame [73 x 2]
# 
#           word     n
#          (chr) (int)
# 1       phrase     8
# 2     sentence     6
# 3        words     4
# 4       called     3
# 5       common     3
# 6  grammatical     3
# 7      meaning     3
# 8         alex     2
# 9         bird     2
# 10    complete     2
# ..         ...   ...

Since the question asks for counts of bigrams and trigrams, tokenizers::tokenize_ngrams is useful:

library(tokenizers)

tokenize_ngrams(text, n = 3L, n_min = 2L, simplify = TRUE) %>%    # tokenize bigrams and trigrams
    as_data_frame() %>%    # structure
    count(value, sort = TRUE)    # count

# Source: local data frame [531 x 2]
# 
#           value     n
#          (fctr) (int)
# 1        of the     5
# 2      a phrase     4
# 3  the sentence     4
# 4          as a     3
# 5        in the     3
# 6        may be     3
# 7    a complete     2
# 8   a phrase is     2
# 9    a sentence     2
# 10      a white     2
# ..          ...   ...
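
If you would rather stay entirely within tidytext, unnest_tokens can also produce n-grams directly by passing token = "ngrams" (it calls the tokenizers package under the hood). A minimal sketch along the same lines as above, assuming a reasonably recent tidytext/dplyr; the column name ngram is just a label I chose, and the printed output will vary with your package versions:

library(tidytext)
library(dplyr)

tibble(text = text) %>%
    unnest_tokens(ngram, text, token = "ngrams", n = 3, n_min = 2) %>%    # bigrams and trigrams
    count(ngram, sort = TRUE)    # count occurrences

This should give essentially the same counts as the tokenizers version, already in a tidy data frame.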
