I was recently assigned to a text-mining project by my boss, and got that ominous feeling of having to blaze a whole new trail. I didn't know much about text mining before, so I went over to Kaggle, the holy land of data science, to see how this kind of work is usually done, and found some data to practice on.

Project introduction:

The data comes entirely from a Kaggle dataset of Twitter messages about the World Cup. In this post we use these short tweets for text mining and analysis.

Data introduction:

library(tidyverse)
library(tidytext)
library(visNetwork)

fifa <- read_csv("FIFA.csv")
glimpse(fifa)

## Observations: 530,000
## Variables: 16
## $ ID <dbl> 1.013597e+18, 1.013597e+18, 1.013597e+18, 1.0...
## $ lang <chr> "en", "en", "en", "en", "en", "en", "en", "en...
## $ Date <dttm> 2018-07-02 01:35:45, 2018-07-02 01:35:44, 20...
## $ Source <chr> "Twitter for Android", "Twitter for Android",...
## $ len <int> 140, 139, 107, 142, 140, 140, 140, 138, 138, ...
## $ Orig_Tweet <chr> "RT @Squawka: Only two goalkeepers have saved...
## $ Tweet <chr> "Only two goalkeepers have saved three penalt...
## $ Likes <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
## $ RTs <int> 477, 1031, 488, 0, 477, 153, 4, 1, 2199, 5146...
## $ Hashtags <chr> "WorldCup,POR,ENG", "WorldCup", "worldcup", "...
## $ UserMentionNames <chr> "Squawka Football", "FC Barcelona,Ivan Rakiti...
## $ UserMentionID <chr> "Squawka", "FCBarcelona,ivanrakitic,HNS_CFF",...
## $ Name <chr> "Cayleb", "Febri Aditya", "??", "Frida Carril...
## $ Place <chr> "Accra", "Bogor", NA, "Zapopan, Jalisco", NA,...
## $ Followers <int> 861, 667, 65, 17, 137, 29, 208, 7, 1, 158, 34...
## $ Friends <int> 828, 686, 67, 89, 216, 283, 338, 9, 6, 245, 3...

The data is laid out very clearly, and our analysis won't need every column, so we keep only Source, Tweet, Hashtags, RTs, Name, and Place, as sketched below.
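A minimal sketch of that selection (column names taken from the glimpse() output above):

fifa <- fifa %>% select(Source, Tweet, Hashtags, RTs, Name, Place)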

EDA:

Which words appear most often in the Tweets:

fifa_tidy <- fifa %>% unnest_tokens(words, Tweet) %>%
  filter(!(words %in% stop_words$word)) %>% filter(str_detect(words, "[a-z]"))

fifa_tidy %>% count(words, sort = T) %>% top_n(20, wt = n) %>%
  ggplot(aes(x = reorder(words, n), y = n)) + geom_col(fill = "#AAB7B8") + theme_bw() +
  labs(y = NULL, x = NULL, title = "Top words in tweets") + coord_flip()

It turns out that across all the tweets, france, world, cup, final, and congratulations are the most frequent words, which matches what happened at last year's World Cup quite well.

fifa_tidy %>% filter(str_detect(Source, "^Twitter for")) %>%
  count(Source, words, sort = T) %>% group_by(Source) %>% top_n(10, wt = n) %>%
  ggplot(aes(x = reorder(words, n), y = n, fill = Source)) + geom_col() +
  theme_bw() + facet_wrap(~Source, scales = "free", ncol = 2) +
  labs(y = NULL, x = NULL, title = "Top words of tweets in each source") + coord_flip() +
  theme(legend.position = "none")

This part looks at word frequency on the different OS platforms. A user on Twitter pointed out that the first panel, 'Twitter for iPhone', contains an extra space, which was indeed a careless slip on my part.
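If that duplicate panel really does come from doubled whitespace inside the source string (an assumption; I haven't re-checked the raw data), a quick fix is to squash the whitespace before counting, using str_squish() from stringr (loaded with the tidyverse):

fifa_tidy <- fifa_tidy %>% mutate(Source = str_squish(Source))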

fifa_tidy %>% count(words,sort = T) %>% top_n(500,wt=n) %>% wordcloud2::wordcloud2()

A word cloud is something you absolutely cannot skip!!!

A big part of text mining is sentiment analysis: the sentiment of the words often decides where the meaning of the whole text goes. Here we visualize the sentiment of the tweets.

fifa_tidy_sentiment <- fifa_tidy %>% rename(word = words) %>%
  inner_join(get_sentiments("bing"), by = "word")

fifa_tidy_sentiment %>% group_by(word,sentiment) %>% summarise(total=n()) %>%
ungroup() %>% group_by(sentiment) %>% arrange(desc(total)) %>% top_n(10) %>%
ggplot(aes(x=reorder(word,total),y=total,fill=sentiment))+geom_col()+
facet_wrap(~sentiment,scales = "free")+theme_bw()+coord_flip()+
theme(legend.position = "none")

The negative ones were probably all posted by Croatia fans, LOL!!!!

fifa_tidy_sentiment %>% group_by(word,sentiment) %>% summarise(total=n()) %>%
arrange(desc(total)) %>%
reshape2::acast(word ~ sentiment, value.var = "total", fill = 0) %>%
wordcloud::comparison.cloud(colors = c("#F8766D", "#00BFC4"),max.words = 350)

One fun thing you can do with word clouds is compare clouds of opposing sentiments.

Digging deeper into the sentiment of the Tweets:

fifa_all_sens <- fifa_tidy %>% rename(word = words) %>% inner_join(get_sentiments("nrc"), by = "word")

fifa_all_sens %>% count(word, sentiment, sort = T) %>% group_by(sentiment) %>% top_n(10) %>%
  ggplot(aes(x = reorder(word, n), y = n, fill = sentiment)) +
  geom_col(show.legend = F) + theme_bw() + facet_wrap(~sentiment, scales = "free", ncol = 3) +
  coord_flip() + labs(x = NULL, y = NULL, title = "The top 10 words under each sentiment category")

fifa_all_sens %>% group_by(word, sentiment) %>% count() %>% bind_tf_idf(word, sentiment, n) %>%
  arrange(desc(tf_idf)) %>% group_by(sentiment) %>% top_n(15) %>%
  ggplot(aes(x = reorder(word, tf_idf), y = tf_idf, fill = sentiment)) +
  geom_col(show.legend = F) + labs(x = NULL, y = "tf-idf") + facet_wrap(~sentiment, ncol = 3, scales = "free") + coord_flip()

tf-idf is an important quantity in text mining; I only know the basics myself, so do look it up if you want the details.
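In brief (this is the standard definition, nothing specific to this dataset): treating each sentiment category as a "document", a word w in document d is weighted by

tf-idf(w, d) = tf(w, d) × ln(N / df(w))

where tf(w, d) is how often w occurs in d, N is the number of documents, and df(w) is the number of documents containing w. Words that are frequent inside one sentiment but rare across the others get the largest weights, which is exactly what the facets above highlight.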

fifa_ngram<-fifa %>% unnest_tokens(bigram,Tweet,token = "ngrams", n=2) %>% select(bigram) %>%
separate(bigram,c("w1","w2"),sep=" ") %>%
filter(!w1 %in% stop_words$word,!w2 %in% stop_words$word) %>% count(w1,w2,sort = T)

fifa_ngram %>% unite(bigram,w1,w2,sep = " ") %>% wordcloud2::wordcloud2()

The earlier tokenization was all single words; the code above switches to splitting the tweets into two-word bigrams. With the AFINN lexicon we can then see which words following "worldcup" contribute the most sentiment:

fifa_ngram %>% filter(w1 == "worldcup") %>%
  inner_join(get_sentiments("afinn"), by = c(w2 = "word")) %>%  # newer tidytext releases name the score column "value"
  mutate(contribution = n * score) %>%  # n is the bigram count already present in fifa_ngram
  arrange(desc(abs(contribution))) %>% mutate(w2 = reorder(w2, contribution)) %>%
  ggplot(aes(w2, contribution, fill = contribution > 0)) + geom_col(show.legend = F) + coord_flip()

Since we already have bigrams, we can also compute a network of relationships between the words:

big_graph <- na.omit(fifa_ngram) %>% filter(n > 4000) %>%  # keep only the most frequent bigrams
  igraph::graph_from_data_frame() %>% toVisNetworkData()   # w1/w2 form the edge list, n an edge attribute
visNetwork(big_graph$nodes, big_graph$edges) %>% visOptions(highlightNearest = TRUE)

Closing words:

This is my first post on text mining, and I still only half-understand many of the technical terms. Most of what's here I learned from this book, which I recommend:

Text Mining with R: A Tidy Approach

www.tidytextmining.com

It's well worth a read, and the English is quite manageable. There is no Chinese translation yet; if I find the time, I may translate it into Chinese and put it on GitHub.

