
Ucto for Python

This is a Python binding to the tokeniser Ucto. Tokenisation is one of the first steps in almost any Natural Language Processing task, yet it is not always as trivial a task as it appears to be. This binding makes the power of the ucto tokeniser available to Python. Ucto itself is a regular-expression based, extensible, and advanced tokeniser written in C++ (https://languagemachines.github.io/ucto).

Installation

Easy

Manual (Advanced)

  • Make sure to first install ucto itself (https://languagemachines.github.io/ucto) and all its dependencies.
  • Install Cython if it is not yet available on your system: $ sudo apt-get install cython cython3 (Debian/Ubuntu; the command may differ for other distributions)
  • Clone this repository and run: $ sudo python setup.py install (Make sure to use the desired version of python)

Advanced note: If the ucto libraries and includes are installed in a non-standard location, you can set the environment variables INCLUDE_DIRS and LIBRARY_DIRS to point to them prior to invoking setup.py install, for example (with purely illustrative paths): $ sudo INCLUDE_DIRS=/opt/ucto/include LIBRARY_DIRS=/opt/ucto/lib python setup.py install

Usage

Import and instantiate the Tokenizer class with a configuration file.

import ucto
configurationfile = "tokconfig-eng"
tokenizer = ucto.Tokenizer(configurationfile)

The configuration files supplied with ucto are named tokconfig-xxx, where xxx corresponds to a three-letter ISO 639-3 language code. There is also a tokconfig-generic configuration that has no language-specific rules. Alternatively, you can make and supply your own configuration file. Note that for older versions of ucto you may need to provide the absolute path (a sketch follows below), but the latest versions will find the configurations supplied with ucto automatically; see the ucto documentation for a list of the configurations available in the latest version.
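
If the configuration is not found by name, for instance with an older ucto version, you can pass an absolute path instead; the path below is purely illustrative and depends on where ucto is installed on your system:

import ucto
tokenizer = ucto.Tokenizer("/usr/local/share/ucto/tokconfig-eng")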

The constructor for the Tokenizer class takes the following keyword arguments:

  • lowercase (defaults to False) -- Lowercase all text
  • uppercase (defaults to False) -- Uppercase all text
  • sentenceperlineinput (defaults to False) -- Set this to True if each sentence in your input is on one line already and you do not require further sentence boundary detection from ucto.
  • sentenceperlineoutput (defaults to False) -- Set this if you want each sentence to be output on one line. This has little effect within the context of Python.
  • paragraphdetection (defaults to True) -- Do paragraph detection. Paragraphs are simply delimited by an empty line.
  • quotedetection (defaults to False) -- Set this if you want to enable the experimental quote detection, which detects quoted text (enclosed within some form of single or double quotes)
  • debug (defaults to False) -- Enable verbose debug output
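
For illustration, a minimal sketch combining several of these keyword arguments (the chosen values are arbitrary):

import ucto

# all keyword arguments are optional and default to the values listed above
tokenizer = ucto.Tokenizer("tokconfig-eng",
                           lowercase=True,
                           paragraphdetection=True,
                           quotedetection=True)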

Text is passed to the tokeniser using the process() method. This method returns the number of tokens rather than the tokens themselves, and it may be called multiple times in sequence. The tokens themselves are buffered in the Tokenizer instance and can be obtained by iterating over it, after which the buffer is cleared:

#pass the text (a str); this may be called multiple times
tokenizer.process(text)

#read the tokenised data
for token in tokenizer:
    #token is an instance of ucto.Token; serialise to string using str()
    print(str(token))

    #tokens remember whether they are followed by a space
    if token.isendofsentence():
        print()
    elif not token.nospace():
        print(" ",end="")

The process() method takes a single string (str) as parameter. The string may contain newlines, but newlines are not necessarily sentence boundaries unless you instantiated the tokenizer with sentenceperlineinput=True.
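
As a minimal sketch of both points, assuming input that is already segmented with one sentence per line:

import ucto

tokenizer = ucto.Tokenizer("tokconfig-eng", sentenceperlineinput=True)
#each newline now marks a sentence boundary; process() returns a token count
numtokens = tokenizer.process("This is the first sentence.\nThis is the second one.\n")
print(numtokens)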

Each token is an instance of ucto.Token. It can be serialised to string using str() as shown in the example above.

The following methods and attributes are available on ucto.Token instances:

  • isendofsentence() -- Returns a boolean indicating whether this is the last token of a sentence.
  • nospace() -- Returns a boolean; if True, there is no space following this token in the original input text.
  • isnewparagraph() -- Returns True if this token is the start of a new paragraph.
  • isbeginofquote() -- Returns True if this token marks the beginning of quoted text.
  • isendofquote() -- Returns True if this token marks the end of quoted text.
  • tokentype -- This is an attribute, not a method. It contains the type or class of the token (e.g. a string like WORD, ABBREVIATION, PUNCTUATION, URL, EMAIL, SMILEY, etc.).
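
A small sketch using these methods and the tokentype attribute, for instance to collect the plain words of each sentence (the example text is arbitrary; the WORD label follows the list above):

import ucto

tokenizer = ucto.Tokenizer("tokconfig-eng")
tokenizer.process("The quick brown fox jumped. The lazy dog slept.")

words = []
for token in tokenizer:
    if token.tokentype == "WORD":  #skip punctuation and other token types
        words.append(str(token))
    if token.isendofsentence():
        print(words)
        words = []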

In addition to the low-level process() method, the tokenizer can also read an input file and produce an output file, in the same fashion as ucto itself does when invoked from the command line. This is achieved using the tokenize(inputfilename, outputfilename) method:

tokenizer.tokenize("input.txt","output.txt")

Input and output files may be either plain text, or in the FoLiA XML format. Upon instantiation of the Tokenizer class, there are two keyword arguments to indicate this:

  • xmlinput or foliainput -- A boolean that indicates whether the input is FoLiA XML (True) or plain text (False). Defaults to False.
  • xmloutput or foliaoutput -- A boolean that indicates whether the output is FoLiA XML (True) or plain text (False). Defaults to False. If this option is enabled, you can set an additional keyword parameter docid (string) to set the document ID.

An example for plain text input and FoLiA output:

tokenizer = ucto.Tokenizer(configurationfile, foliaoutput=True)
tokenizer.tokenize("input.txt", "ucto_output.folia.xml")
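
A sketch using the optional docid parameter mentioned above to set the FoLiA document ID explicitly (the filenames are hypothetical):

tokenizer = ucto.Tokenizer(configurationfile, foliaoutput=True, docid="mydocument")
tokenizer.tokenize("input.txt", "mydocument.folia.xml")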

FoLiA documents retain all the information ucto can output, unlike the plain text representation. These documents can be read and manipulated from Python using the FoLiaPy library. FoLiA is especially recommended if you intend to further enrich the document with linguistic annotation. A small example of reading ucto's FoLiA output using this library follows, but consult the documentation for more:

import folia.main as folia
doc = folia.Document(file="ucto_output.folia.xml")
for paragraph in doc.paragraphs():
    for sentence in paragraph.sentences():
        for word in sentence.words():
            print(word.text(), end="")
            if word.space:
                print(" ", end="")
        print()
    print()

Test and Example

Run and inspect example.py.

Comments
  • undefined symbol: ...

    Hi there,

    I have a clean ucto installation from sudo apt install ucto. When I compile the python extension, however, I can't import it since it fails with:

    ImportError: /home/manjavacas/.pyenv/versions/anaconda3-4.4.0/lib/python3.6/site-packages/ucto.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN9Tokenizer14TokenizerClass4initERKSs
    

    Not sure what might be going wrong, since ucto works perfectly fine and the extension compiles without errors.

    Any ideas?

    question 
    opened by emanjavacas 8
  • Compilation fails after latest ucto release

        gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fPIC -I/home/proycon/envs/dev/include -I/usr/include/ -I/usr/include/libxml2 -I/usr/local/include/ -I/home/proycon/envs/dev/include -I/usr/include/python3.10 -c ucto_wrapper.cpp -o build/temp.linux-x86_64-3.10/ucto_wrapper.o --std=c++0x -D U_USING_ICU_NAMESPACE=1
        ucto_wrapper.cpp: In function ‘PyObject* __pyx_gb_4ucto_9Tokenizer_8generator(__pyx_CoroutineObject*, PyThreadState*, PyObject*)’:
        ucto_wrapper.cpp:3750:86: error: no match for ‘operator=’ (operand types are ‘std::vector<std::__cxx11::basic_string<char> >’ and ‘std::vector<icu_70::UnicodeString>’)
         3750 |   __pyx_cur_scope->__pyx_v_results = __pyx_cur_scope->__pyx_v_self->tok.getSentences();
    
    bug 
    opened by proycon 3
  • Tokenizer does not return lowercase tokens when lowercase = True

    When I call tokenizer with lowercase True, the output contains tokens with uppercase.

    t = ucto.Tokenizer("tokconfig-nld",lowercase = True,sentencedetection=False,paragraphdetection=False)
    ucto: textcat configured from: /vol/customopt/lamachine.stable/share/ucto/textcat.cfg

    z = x.article_set.all()[0]

    t.process(z.text)

    [str(token) for token in t]

    ["'", 'oor', 'onze', 'redacteur', 'mr.', 'F.', 'KUITENBROUWER', 'AMSTERDAM',

    bug 
    opened by martijnbentum 3
  • Manual installation fails: config.h: no such file or directory

    I’ve tried to follow the manual installation instructions on Ubuntu 16.04, but it seems to be missing a file:

    user@unut:~/git/python-ucto$ git status
    On branch master
    Your branch is up-to-date with 'origin/master'.
    nothing to commit, working directory clean
    user@unut:~/git/python-ucto$ uname -a
    Linux unut 4.4.0-124-generic #148-Ubuntu SMP Wed May 2 13:00:18 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
    user@unut:~/git/python-ucto$ sudo python setup.py install
    /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'install_requires'
      warnings.warn(msg)
    running install
    running build
    running build_ext
    cythoning ucto_wrapper2.pyx to ucto_wrapper2.cpp
    building 'ucto' extension
    x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/usr/include/ -I/usr/include/libxml2 -I/usr/local/include/ -I/usr/include/python2.7 -c ucto_wrapper2.cpp -o build/temp.linux-x86_64-2.7/ucto_wrapper2.o --std=c++0x -D U_USING_ICU_NAMESPACE=1
    cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
    In file included from ucto_wrapper2.cpp:457:0:
    /usr/include/ucto/tokenize.h:33:20: fatal error: config.h: No such file or directory
    compilation terminated.
    error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
    
    opened by texttheater 3
  • TokenRole has no attribute ENDOFQUOTE

    Hi there, I noticed that isendofquote seems to be broken.

    Seems like a typo on this line:

    https://github.com/proycon/python-ucto/blob/65a7f03a92f60fa28e330a5fb735d75230cdbec4/ucto_wrapper.pyx#L29

    which should rather be ENDOFQUOTE.

    bug 
    opened by emanjavacas 1
  • Question: possible to retrieve untokenized sentences?

    May sound silly, but would it be possible to create a method that would allow retrieving sentences from the tokenizer without whitespace between punctuation marks (e.g. untokenized)? E.g. maybe providing a tuple that would hold two versions of a sentence, both the tokenized, as well as the original?

    It is practical to keep the untokenized sentence in some scenarios (e.g. showing them to end users), and reconstructing it by script would be rather hacky and imprecise I guess.

    enhancement 
    opened by pirolen 1