This project consists of a set of Python scripts. The first script uses the Tweepy library to fetch data from a specified Twitter account: it grabs the account's 200 latest tweets and stores the data for each one, such as the text, hashtags, retweet count, and so on.
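A minimal sketch of that fetching step might look like the following. The credential placeholders and the `fetch_tweets` / `tweet_to_record` helper names are illustrative, not the project's actual code; only the Tweepy calls (`OAuthHandler`, `API.user_timeline`) are the library's real API.

```python
def tweet_to_record(status_json):
    """Reduce a raw tweet JSON object to the fields kept in the output file."""
    entities = status_json.get("entities", {})
    return {
        "text": status_json["text"],
        "created_at": status_json["created_at"],
        "hashtags": entities.get("hashtags", []),
        "user_mentions": entities.get("user_mentions", []),
        "retweet_count": status_json["retweet_count"],
        "favorite_count": status_json["favorite_count"],
        "retweeted": status_json["retweeted"],
        "url": "https://twitter.com/%s/status/%s" % (
            status_json["user"]["screen_name"], status_json["id_str"]),
    }

def fetch_tweets(screen_name, count=200):
    """Fetch the latest tweets of an account, keyed by tweet id."""
    # Tweepy is imported lazily so the record helper above can be used
    # without the library installed.
    import tweepy
    # Placeholder credentials; real values come from the Twitter developer portal.
    auth = tweepy.OAuthHandler("API_KEY", "API_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth)
    statuses = api.user_timeline(screen_name=screen_name, count=count)
    return {s.id_str: tweet_to_record(s._json) for s in statuses}
```

The result of `fetch_tweets` can then be written out with `json.dump(records, f, indent=4)` to produce a file like the example below.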
Here's an example of a tweet that has been fetched using Tweepy:
"1060934916651790336": {
    "hashtags": [],
    "created_at": "2018-11-09 16:39:30",
    "retweet_count": 9,
    "text": "Klokt skrevet @FRI_HET https://t.co/dSVhfCJmpo",
    "user_mentions": [
        {
            "id_str": "1683115626",
            "name": "FRI",
            "id": 1683115626,
            "screen_name": "FRI_HET",
            "indices": [
                14,
                22
            ]
        }
    ],
    "retweeted": false,
    "favorite_count": 34,
    "url": "https://twitter.com/erna_solberg/status/1060934916651790336"
},
After the first script has mined the data, it stores it in a .json file, as in the example above, for further processing. That .json file serves as input for the translation and analysis scripts, which translate and analyse the tweets using the Google Cloud APIs and the AYLIEN Sentiment Analysis API. The resulting data is then displayed on the website shown in the video.
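A sketch of how that translate-then-analyse pipeline could be wired up is below. The `analyse_tweets` orchestration and the two `make_*` factory helpers are illustrative assumptions, not the project's actual code; the external clients are injected as plain callables so the pipeline itself can run and be tested without network access.

```python
def analyse_tweets(records, translate, sentiment):
    """Translate each tweet's text and attach a sentiment label.

    `translate` and `sentiment` are callables wrapping the external APIs;
    `records` is the dict loaded from the .json file produced by the miner.
    """
    results = {}
    for tweet_id, record in records.items():
        english = translate(record["text"])
        results[tweet_id] = dict(record,
                                 translated_text=english,
                                 sentiment=sentiment(english))
    return results

def make_google_translate():
    # Hypothetical wiring of the Google Cloud Translation (v2) client;
    # requires GOOGLE_APPLICATION_CREDENTIALS to be configured.
    from google.cloud import translate_v2
    client = translate_v2.Client()
    return lambda text: client.translate(text, target_language="en")["translatedText"]

def make_aylien_sentiment():
    # Hypothetical wiring of the AYLIEN Text Analysis client.
    from aylienapiclient import textapi
    client = textapi.Client("APP_ID", "APP_KEY")  # placeholder credentials
    return lambda text: client.Sentiment({"text": text})["polarity"]
```

With real credentials this would be driven as `analyse_tweets(records, make_google_translate(), make_aylien_sentiment())`, and the result written back out with `json.dump` for the website to consume.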