TLA - Twitter Linguistic Analysis


A tool for linguistic analysis of Twitter communities

TLA is built with PyTorch, Transformers, and several other state-of-the-art machine learning libraries. It aims to expedite and structure the cumbersome process of collecting, labeling, and analyzing Twitter data across a corpus of languages, while providing detailed labeled datasets for each of them. The analysis TLA produces can also help in understanding the sentiments of different linguistic communities and in devising solutions to their problems based on that analysis. The languages the library supports are listed below:

| Language   | Code | Language   | Code  |
|------------|------|------------|-------|
| English    | en   | Hindi      | hi    |
| Swedish    | sv   | Thai       | th    |
| Dutch      | nl   | Japanese   | ja    |
| Turkish    | tr   | Urdu       | ur    |
| Indonesian | id   | Portuguese | pt    |
| French     | fr   | Chinese    | zn-ch |
| Spanish    | es   | Persian    | fa    |
| Romanian   | ro   | Russian    | ru    |

Features

  • Provides 16 labeled datasets in different languages for analysis.
  • Implements a BERT-based architecture to identify languages.
  • Provides functionality to extract, process, and label tweets from Twitter.
  • Provides a Random Forest classifier to run sentiment analysis on any string.

Installation

pip install --upgrade git+https://github.com/tusharsarkar3/TLA.git

Overview

Extract data
from TLA.Data.get_data import store_data
store_data('en', False)

This will extract the data and store it, unlabeled, in a new directory named datasets inside Data.

Label data
from TLA.Datasets.get_lang_data import language_data
df = language_data('en')
print(df)

This will print the labeled data that we have already collected.

Classify languages
Training

Training can be done in the following way:

from TLA.Lang_Classify.train import train_lang
train_lang(path_to_dataset, epochs)
Prediction

Inference is done in the following way:

from TLA.Lang_Classify.predict import get_model, predict
model = get_model(path_to_weights)
preds = predict(dataframe_to_be_used, model)
Analyse
Training

Training can be done in the following way:

from TLA.Analyse.train_rf import train_rf
train_rf(path_to_dataset)

This will store the vectorizers and models in separate directories named saved_vec and saved_rf inside the Analysis directory. Further instructions for training on multiple languages are given in the next section, which shows how to run the commands from the CLI.

Final Analysis

Analysis is done in the following way:

from TLA.Analysis.analyse import analyse_data 
analyse_data(path_to_weights)

This will store the final analysis as .csv files inside a new directory named analysis.

Overview with Git

Installation (alternative method)
git clone https://github.com/tusharsarkar3/TLA.git

Extract data

Navigate to the required directory:

cd Data

Run the following command:

python get_data.py --lang en --process True

The --lang flag specifies the language of the required dataset, and the --process flag indicates whether pre-processing should be done before returning the data. Use the language codes from the table above for the --lang flag.

Loading Dataset

To load a dataset, run the following in Python.

import pandas as pd
df = pd.read_csv("TLA/TLA/Datasets/get_data_en.csv")

This returns a dataframe containing the data for the requested language.

In the filename get_data_en.csv, en can be substituted with the desired language code to load the dataframe for that language.
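For example, the substitution can be expressed over the language codes from the table above (the helper below is only an illustration, not part of the TLA API):

```python
# Illustrative helper (not part of the TLA API): build the dataset CSV
# path for any supported language code from the table above.
def dataset_path(lang_code):
    return "TLA/TLA/Datasets/get_data_{}.csv".format(lang_code)

print(dataset_path("fr"))  # TLA/TLA/Datasets/get_data_fr.csv
```

Passing the resulting path to pd.read_csv loads the dataframe for that language.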

Pre-Processing

To preprocess a dataframe of tweets, run the following commands.

In your terminal use code

cd Data

then run the following in Python:

from TLA.Data import Pre_Process_Tweets

df=Pre_Process_Tweets.pre_process_tweet(df)

Here the function pre_process_tweet takes a dataframe of tweets as input and returns a dataframe with the list of preprocessed words for each tweet placed next to that tweet.
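TLA's pre_process_tweet handles this cleaning internally. As a rough sketch of the kind of steps involved, a minimal version might lower-case the text, strip URLs, mentions, and hashtags, and tokenise (this toy function is not the library's actual implementation):

```python
import re

def toy_preprocess(tweet):
    """Toy illustration of tweet cleaning; TLA's pre_process_tweet
    is the library's actual (and more thorough) implementation."""
    tweet = tweet.lower()
    tweet = re.sub(r"https?://\S+", "", tweet)  # drop URLs
    tweet = re.sub(r"[@#]\w+", "", tweet)       # drop mentions/hashtags
    return re.findall(r"[a-z']+", tweet)        # simple word tokens

print(toy_preprocess("Loving the new update! https://t.co/xyz @dev"))
# ['loving', 'the', 'new', 'update']
```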

Analysis

Training

To train a random forest classifier for sentiment analysis, run the following in your terminal:

cd Analysis

then

python train_rf.py --path "path to your datafile" --train_all_datasets False

Here the --path flag is the path to the dataset you want to train the Random Forest classifier on, and the --train_all_datasets flag is a boolean that can be used to train the model on all datasets at once.

The output is a .pkl file saved at "TLA\Analysis\saved_rf{}.pkl". The fitted vectorizer is stored in a .pkl file at "TLA\Analysis\saved_vec{}.pkl".

Get Sentiment

To get the sentiment of any string use the following code.

In your terminal type

cd Analysis

then in your terminal type

python get_sentiment.py --prediction "Your string for prediction to be made upon" --lang "en"

Here the --prediction flag takes the string whose sentiment you want, and the --lang flag takes the language code of the language the string is written in.

The output is a sentiment, either positive or negative, depending on your string.
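Internally this is a vectorise-then-classify pipeline: a saved vectorizer turns the string into features and the Random Forest predicts the label. The scikit-learn sketch below illustrates the same idea on toy data (it does not use TLA's trained models or its actual feature set):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier

# Toy corpus standing in for TLA's labeled tweet data.
texts = ["great day love it", "awful terrible hate this",
         "love this great product", "hate it awful experience"]
labels = ["positive", "negative", "positive", "negative"]

# Vectorise the text into bag-of-words counts.
vec = CountVectorizer()
X = vec.fit_transform(texts)

# Fit a Random Forest on the vectorised corpus.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, labels)

# Classify a new string with the same vectorizer + model pair.
print(clf.predict(vec.transform(["what a great day"])))
```

TLA's CLI does the equivalent, loading the per-language vectorizer and model from the saved .pkl files.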

Statistics

To get comprehensive statistics on the sentiment of the datasets, run the following command.

In your terminal type

cd Analysis

then

python analyse.py 

This outputs a table1.csv file at 'TLA\Analysis\analysis\table1.csv' with statistics on the percentage of positive and negative tweets for each language dataset.

It also outputs a table2.csv file at 'TLA\Analysis\analysis\table2.csv' with statistics for all languages combined.
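The statistics themselves are straightforward aggregations over the labeled tweets. The pandas sketch below illustrates the kind of computation behind the two tables on toy data (it is not TLA's actual code):

```python
import pandas as pd

# Toy labeled tweets standing in for TLA's per-language datasets.
df = pd.DataFrame({
    "lang": ["en", "en", "en", "fr", "fr"],
    "sentiment": ["positive", "negative", "positive",
                  "positive", "negative"],
})

# Per-language positive/negative percentages, as in table1.csv.
table1 = (df.groupby("lang")["sentiment"]
            .value_counts(normalize=True)
            .mul(100).round(2)
            .unstack(fill_value=0))
print(table1)

# Combined percentages over all languages, as in table2.csv.
table2 = df["sentiment"].value_counts(normalize=True).mul(100).round(2)
print(table2)
```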

Language Classification

Training

To train a model for language classification on a given dataset, run the following commands.

In your terminal run

cd Lang_Classify

then run

python train.py --data "path for your dataset" --model "path to weights if pretrained" --epochs 4

The --data flag requires the path to your training dataset.

The --model flag takes the path to pretrained weights, if you want to start from an existing model.

The --epochs flag sets the number of epochs to train your model for.

The output is a .pt file named saved_wieghts_full.pt in which your trained weights are stored.

Prediction

To make a prediction on any given string, use the following code.

In your terminal type

cd Lang_Classify

then run the code

python predict.py --predict "Text/DataFrame for language to predicted" --weights " Path for the stored weights of your model " 

The --predict flag requires the string whose language you want to identify.

The --weights flag is the path to the stored weights of the model used to make predictions.

The output is the language your string was typed in.


Results

[Figure: Performance of TLA (loss vs. epochs)]

| Language   | Total tweets | Positive tweets (%) | Negative tweets (%) |
|------------|--------------|---------------------|---------------------|
| English    | 500          | 66.8                | 33.2                |
| Spanish    | 500          | 61.4                | 38.6                |
| Persian    | 50           | 52                  | 48                  |
| French     | 500          | 53                  | 47                  |
| Hindi      | 500          | 62                  | 38                  |
| Indonesian | 500          | 63.4                | 36.6                |
| Japanese   | 500          | 85.6                | 14.4                |
| Dutch      | 500          | 84.2                | 15.8                |
| Portuguese | 500          | 61.2                | 38.8                |
| Romanian   | 457          | 85.55               | 14.44               |
| Russian    | 213          | 62.91               | 37.08               |
| Swedish    | 420          | 80.23               | 19.76               |
| Thai       | 424          | 71.46               | 28.53               |
| Turkish    | 500          | 67.8                | 32.2                |
| Urdu       | 42           | 69.04               | 30.95               |
| Chinese    | 500          | 80.6                | 19.4                |

References

@misc{sarkar2021tla,
  title = {TLA: Twitter Linguistic Analysis},
  author = {Tushar Sarkar and Nishant Rajadhyaksha},
  year = {2021},
  eprint = {2107.09710},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL}
}

@misc{640cba8b-35cb-475e-ab04-62d079b74d13,
  title = {TLA: Twitter Linguistic Analysis},
  author = {Tushar Sarkar and Nishant Rajadhyaksha},
  journal = {Software Impacts},
  doi = {10.24433/CO.6464530.v1},
  howpublished = {\url{https://www.codeocean.com/}},
  year = {2021},
  month = {6},
  version = {v1}
}

Features to be added

  • Access to more languages
  • A GUI-based system for better accessibility
  • Improved performance of the baseline model

Developed by Tushar Sarkar and Nishant Rajadhyaksha
