Code Implementation of "Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction".

Overview

Span-ASTE: Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction

***** New March 31st, 2022: Scikit-Style API for Easy Usage *****


This repository implements our ACL 2021 research paper Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction. Our goal is to extract sentiment triplets of the form (aspect target, opinion expression, sentiment polarity).

Installation

Data Format

Our span-based model uses data files in which each line contains one input sentence followed by a list of output triplets, separated by #### fields:

sentence#### #### ####[triplet_0, ..., triplet_n]

Each triplet is a tuple (span_a, span_b, label). Each span is a list of word indices: if the span covers a single word, the list contains only that word's index; if the span covers multiple words, the list contains the indices of the first and last words. For example:

It also has lots of other Korean dishes that are affordable and just as yummy .#### #### ####[([6, 7], [10], 'POS'), ([6, 7], [14], 'POS')]

For prediction, the data can contain the input sentence only, with an empty list for triplets:

sentence#### #### ####[]
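
As an illustration, the following minimal sketch (not part of the repository; the helper name parse_line is ours) shows how such a line can be parsed with the Python standard library:

import ast

def parse_line(line):
    # Fields are separated by "####"; the sentence comes first, the triplet list last.
    parts = line.rstrip("\n").split("####")
    return parts[0], ast.literal_eval(parts[-1])

sentence, triplets = parse_line(
    "It also has lots of other Korean dishes that are affordable and just as yummy .#### #### ####[([6, 7], [10], 'POS'), ([6, 7], [14], 'POS')]"
)
words = sentence.split()
for aspect, opinion, polarity in triplets:
    aspect_text = " ".join(words[aspect[0]:aspect[-1] + 1])     # "Korean dishes"
    opinion_text = " ".join(words[opinion[0]:opinion[-1] + 1])  # "affordable"
    print(aspect_text, opinion_text, polarity)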

Predict Using Model Weights

  • First, download and extract the pre-trained weights to pretrained_dir.
  • The input data file path_in and the output data file path_out use the same data format.
from wrapper import SpanModel

model = SpanModel(save_dir=pretrained_dir, random_seed=0)
model.predict(path_in, path_out)
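
If you only have raw sentences, an input file for prediction can be created by appending the empty fields and an empty triplet list described above; a small sketch (the file names here are placeholders of ours):

# Sketch: build an unlabeled input file in the expected format, then predict.
with open("unlabeled.txt", "w") as f:
    for s in ["The food was great but the service was slow ."]:
        f.write(s + "#### #### ####[]\n")

model = SpanModel(save_dir=pretrained_dir, random_seed=0)
model.predict(path_in="unlabeled.txt", path_out="unlabeled_pred.txt")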

Model Training

  • Configure the model with the save directory and random seed.
  • Start training with the training and validation data, which use the same data format.
model = SpanModel(save_dir=save_dir, random_seed=random_seed)
model.fit(path_train, path_dev)
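
To report results averaged over several runs, the model can be trained with different random seeds; a sketch (the seed values and directory pattern are ours):

# Sketch: train one model per random seed, each in its own save directory.
for seed in (0, 1, 2):
    model = SpanModel(save_dir=f"outputs/seed_{seed}", random_seed=seed)
    model.fit(path_train, path_dev)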

Model Evaluation

  • Use the trained model to predict triplets for the test sentences, writing the output to path_pred.
  • The model includes a scoring function that reports F1 metrics for triplet extraction.
model.predict(path_in=path_test, path_out=path_pred)
results = model.score(path_pred, path_test)
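
Putting the pieces together, an end-to-end run might look like the following sketch (the paths are placeholders, and the exact contents of results depend on the scoring function):

# Sketch: train, predict, and score in one script.
model = SpanModel(save_dir="outputs/demo", random_seed=42)
model.fit("data/train.txt", "data/dev.txt")                  # train with validation data
model.predict(path_in="data/test.txt", path_out="pred.txt")  # write predicted triplets
results = model.score("pred.txt", "data/test.txt")           # F1 metrics for triplet extraction
print(results)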

Research Citation

If the code is useful for your research project, we would appreciate it if you cite the following paper:

@inproceedings{xu-etal-2021-learning,
    title = "Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction",
    author = "Xu, Lu  and
      Chia, Yew Ken  and
      Bing, Lidong",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.367",
    doi = "10.18653/v1/2021.acl-long.367",
    pages = "4755--4766",
    abstract = "Aspect Sentiment Triplet Extraction (ASTE) is the most recent subtask of ABSA which outputs triplets of an aspect target, its associated sentiment, and the corresponding opinion term. Recent models perform the triplet extraction in an end-to-end manner but heavily rely on the interactions between each target word and opinion word. Thereby, they cannot perform well on targets and opinions which contain multiple words. Our proposed span-level approach explicitly considers the interaction between the whole spans of targets and opinions when predicting their sentiment relation. Thus, it can make predictions with the semantics of whole spans, ensuring better sentiment consistency. To ease the high computational cost caused by span enumeration, we propose a dual-channel span pruning strategy by incorporating supervision from the Aspect Term Extraction (ATE) and Opinion Term Extraction (OTE) tasks. This strategy not only improves computational efficiency but also distinguishes the opinion and target spans more properly. Our framework simultaneously achieves strong performance for the ASTE as well as ATE and OTE tasks. In particular, our analysis shows that our span-level approach achieves more significant improvements over the baselines on triplets with multi-word targets or opinions.",
}
Comments
  • Train model for new data collected from social media

    Hi, I would like to train this model on a new dataset in another language (Bahasa), since its aspects and opinions, especially in social media text, form spans of varying lengths. How should I run the code for this?

    opened by Lafandi 7
  • Command error

    {'command': 'cd /home/data2/yj/Span-ASTE && allennlp train outputs/14lap/seed_0/config.jsonnet --serialization-dir outputs/14lap/seed_0/weights --include-package span_model'}

    /bin/sh: allennlp: command not found

    Which file do I need to change this in? I haven't been able to find it.

    opened by lzf00 6
  • Retrain with new language

    Hi, I have some questions (sorry if this is a beginner question; I am new to this field). I want to change the word embedder to a BERT model pretrained for my language (Indonesian, using IndoBERT). Can you give some tips on how to change the embedder to my language? Thanks!

    opened by rdyzakya 5
  • Using the notebook when there is no GPU

    Hello! Thank you for sharing this work! I was wondering how I can use the demo notebook locally when there is no GPU?

    When running the cell under "# Use pretrained SpanModel weights for prediction", I got this error:

    2022-07-06 12:28:07,840 - INFO - allennlp.common.plugins - Plugin allennlp_models available
    Traceback (most recent call last):
      File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/bin/allennlp", line 8, in <module>
        sys.exit(run())
      File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/__main__.py", line 34, in run
        main(prog="allennlp")
      File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/commands/__init__.py", line 118, in main
        args.func(args)
      File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/commands/predict.py", line 205, in _predict
        predictor = _get_predictor(args)
      File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/commands/predict.py", line 105, in _get_predictor
        check_for_gpu(args.cuda_device)
      File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/common/checks.py", line 131, in check_for_gpu
        " 'trainer.cuda_device=-1' in the json config file." + torch_gpu_error
    allennlp.common.checks.ConfigurationError: Experiment specified a GPU but none is available; if you want to run on CPU use the override 'trainer.cuda_device=-1' in the json config file.
    module 'torch.cuda' has no attribute '_check_driver'

    I changed cuda_device to -1 in the jsonnet files in your training_config folder, but still no luck.

    opened by xiaoqingwan 5
  • Suggestions to run it against other datasets

    Hi! I'm pretty new to deep learning and ASTE.

    Can you please suggest the necessary steps to run this against another dataset? Do I need to follow this data structure (https://github.com/xuuuluuu/SemEval-Triplet-data/blob/master/README.md#data-description) when labeling my dataset? How can I modify the code on Colab for new datasets? Any other advice?

    Thank you

    opened by Jurys22 4
  • Running problem

    Hello, I have a question. I use PyCharm to run your project, but it reports an error in the main.py file: ModuleNotFoundError: No module named '_jsonnet'. I guess the main reason is the line import _jsonnet # noqa. Can you tell me a solution? Thank you very much.

    opened by FengLingCong13 4
  • Data format

    Excuse me, how do you label the data so that the input format is as follows:

    Exactly as posted plus a great value .####Exactly=O as=O posted=O plus=O a=O great=O value=T-POS .=O####Exactly=O as=O posted=O plus=O a=O great=S value=O .=O####[([6], [5], 'POS')]
    The specs are pretty good too .####The=O specs=T-POS are=O pretty=O good=O too=O .=O####The=O specs=O are=O pretty=O good=S too=O .=O####[([1], [4], 'POS')]

    opened by arroyoaaa 4
  • Interpretation of the results

    Hello, I was looking at the file in

    /content/Span-ASTE/model_outputs/aste_sample_c7b00b66bf7ec669d23b80879fda043d/predict_dev.jsonl

    I would like to know what the numbers in predicted_ner and predicted_relations mean, for example:

    [[0, 0, 1, 1, 'NEG', 2.777, 0.971]]

    What are 2.777 and 0.971 referring to?

    Thank you

    opened by Jurys22 3
  • I installed the package according to the requirements. I wanted to use the pre-trained model to make predictions, but it failed to run.

    Two errors were reported:

    1. allennlp.common.checks.ConfigurationError: Extra parameters passed to SpanModel: {'relation_head_type': 'proper', 'use_bilstm_after_embedder': False, 'use_double_mix_embedder': False, 'use_ner_embeds': False}

    Traceback (most recent call last):
      File "X:\workspace\python\papercode\@aspect\Span-ASTE\aste\test.py", line 4, in <module>
        model.predict('test.txt', "pred.txt")
      File "X:\workspace\python\papercode\@aspect\Span-ASTE\aste\wrapper.py", line 83, in predict
        with open(path_temp_out) as f:

    2. FileNotFoundError: [Errno 2] No such file or directory: 'X:\workspace\python\papercode\@aspect\Span-ASTE\pretrained_dir\temp_data\pred_out.json'

    opened by SiriusXT 2
  • IndexError: List assignment index out of range

    I've annotated my own data and tried to train the model on it, but I run into the IndexError above. The command runs successfully, but the model doesn't train on the annotated data; looking at the out.log files, we see this error. The annotated data follows the correct format, as I'm able to preview it with the Data Exploration command. Any help would be appreciated, please! :)

    opened by jasonhuynh83 2
  • No such file or directory: 'pretrained_14res/temp_data/pred_out.json'

    Installed it successfully on macOS, but I am getting the error that pred_out.json is not found. Not sure why this works in Colab but not when I install it on my local machine. Can anyone help me? I have downloaded the folder correctly, and it contains all the required files. I have tried both 14lap and 14res, but they have the same issue.

    opened by dipanmoy 2
  • python wrapper.py

    Hi, I'm puzzled: when running wrapper.py, the following appears, which I can't understand:

    NAME
        wrapper.py

    SYNOPSIS
        wrapper.py GROUP | COMMAND

    GROUPS
        GROUP is one of the following:

     json
       JSON (JavaScript Object Notation) <http://json.org> is a subset of JavaScript syntax (ECMA-262 3rd edition) used as a lightweight data interchange format.
    
     os
       OS routines for NT or Posix depending on what system we're on.
    
     shutil
       Utility functions for copying and archiving files and directory trees.
    
     sys
       This module provides access to some objects used or maintained by the interpreter and to functions that interact strongly with the interpreter.
    
     List
       The central part of internal API.
    
     Tuple
       Tuple type; Tuple[X, Y] is the cross-product type of X and Y.
    
     Optional
       Internal indicator of special typing constructs. See _doc instance attribute for specific docs.
    

    COMMANDS
        COMMAND is one of the following:

     Namespace
       Simple object for storing attributes.
    
     Path
       PurePath subclass that can make system calls.
    
     train_model
       Trains the model specified in the given [`Params`](../common/params.md#params) object, using the data and training parameters also specified in that object, and saves the results in `serialization_dir`
    
    opened by xian-xian 2
  • ConfigurationError: key "dataset_reader" is required

    I was trying to replicate the same setup on Azure Databricks. While trying to train the model, I am getting the error ConfigurationError: key "dataset_reader" is required.

    Can this solution be implemented in the Databricks environment? @chiayewken

    opened by tsharisaravanan 1
  • What does "Optional: Set up NLTK packages" mean? Could someone explain it?

    Optional: Set up NLTK packages

    if [[ -f punkt.zip ]]; then
        mkdir -p /home/admin/nltk_data/tokenizers
        cp punkt.zip /home/admin/nltk_data/tokenizers
    fi
    if [[ -f wordnet.zip ]]; then
        mkdir -p /home/admin/nltk_data/corpora
        cp wordnet.zip /home/admin/nltk_data/corpora
    fi

    I don't understand what this means. I'm a first-year graduate student, please help.

    opened by xian-xian 5
  • An error for PosixPath

    Hi, I have some questions to ask you.

    The params_file is a string type, but this error occurred:

    Traceback (most recent call last):
      File "/Span-ASTE-main/aste/wrapper.py", line 177, in <module>
        model.fit(path_train, path_dev)
      File "/Span-ASTE-main/aste/wrapper.py", line 54, in fit
        test_data_path=str(self.save_temp_data(path_dev, "dev")),
      File "/lib/python3.7/site-packages/allennlp/common/params.py", line 462, in from_file
        file_dict = json.loads(evaluate_file(params_file, ext_vars=ext_vars))
    TypeError: argument 1 must be str, not PosixPath

    By the way, which file should I run to start your code, main.py or wrapper.py?

    opened by Chen-PengF 1
  • Demo file not working: No module named 'data_utils'

    Hi,

    I tried to run the demo file, but it shows the error "No module named 'data_utils'".

    opened by qi-xia 1
Owner
Chia Yew Ken
Hi! I'm a 2nd-year PhD student with SUTD and Alibaba. My research interests currently include zero-shot learning, structured prediction, and sentiment analysis.