I am trying to use Tweepy (a Python library for the Twitter API) to crawl and store Twitter data in a fast, scalable way. I'm mostly interested in follower/following relationships. Is there a way to do this faster than I currently am? With the 15-minute rate-limit window, it seems like I can only store maybe a dozen connections per window. The tweepy.Cursor line seems to be the bottleneck.
for iters in range(20):
    nameSearching = namesToSearch[0]
    print("getting followers of " + nameSearching)
    # This Cursor call is what eats the rate-limit window:
    # one followers/ids request per page of IDs.
    for follower_id in tweepy.Cursor(api.followers_ids, screen_name=nameSearching).items(bfs_value):
        print(follower_id)
        ids.append(follower_id)
    print("ids loaded")
    namesToSearch.pop(0)
    nodeSearching = nodemap[nameSearching]
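
For completeness, here is a stripped-down, self-contained sketch of the surrounding setup. It assumes Tweepy 3.x, placeholder credentials, dummy values for bfs_value and namesToSearch, and wait_on_rate_limit=True so the script sleeps through the window instead of erroring out (that keeps it running, but obviously doesn't make it faster):

import tweepy

# Placeholder credentials -- swap in real keys.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

# Assumed setup: wait_on_rate_limit makes Tweepy sleep until the
# 15-minute window resets instead of raising an error mid-crawl.
api = tweepy.API(auth, wait_on_rate_limit=True)

bfs_value = 10                   # dummy value: follower IDs to take per account
namesToSearch = ["someaccount"]  # dummy seed queue for the BFS
ids = []                         # collected follower IDs

while namesToSearch:
    name = namesToSearch.pop(0)
    # Each underlying followers/ids request returns up to 5,000 IDs,
    # but only 15 such requests are allowed per 15-minute window.
    for follower_id in tweepy.Cursor(api.followers_ids, screen_name=name).items(bfs_value):
        ids.append(follower_id)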