DaCy: The State-of-the-Art Danish NLP Pipeline Using SpaCy

Overview

DaCy: A SpaCy NLP Pipeline for Danish


DaCy is a Danish preprocessing pipeline trained using SpaCy. At the time of writing it achieves state-of-the-art performance on all benchmark tasks for Danish. This repository contains the code for reproducing DaCy. To download the models, use the DaNLP package (request pending), SpaCy (request pending), or download the project directly here.

Reproduction

The folder DaCy contains a SpaCy project which allows for a reproduction of the results. This folder also includes the evaluation metrics on DaNE.

Usage

To load the project using the direct download, simply place the downloaded "packages" folder in your directory and load the model using SpaCy:

import spacy
nlp = spacy.load("da_dacy_large_tft-0.0.0")

More explicitly, loading from the unpacked folder is:

nlp = spacy.load("da_dacy_large_tft-0.0.0/da_dacy_large_tft/da_dacy_large_tft-0.0.0")

Thus, if you get an error, you might be loading from the outer folder called da_dacy_large_tft-0.0.0 rather than the inner one.
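If in doubt, here is a small sketch of resolving the inner path explicitly; the folder names mirror the path used above, and spacy.load accepts a path object:

import spacy
from pathlib import Path

# the direct download unpacks to an outer folder; the loadable model directory
# sits two levels down, mirroring the path used above
outer = Path("da_dacy_large_tft-0.0.0")
inner = outer / "da_dacy_large_tft" / "da_dacy_large_tft-0.0.0"

nlp = spacy.load(inner)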

To obtain SOTA performance on lemmatization as well, you should add this lemmatization pipeline:

import lemmy.pipe

pipe = lemmy.pipe.load('da')

# Add the component to the spaCy pipeline
# (note: lemmy targets the spaCy v2 API; under spaCy v3, add_pipe expects a
# registered component name rather than a component instance)
nlp.add_pipe(pipe, after='tagger')

# Lemmas can now be accessed using the `._.lemmas` attribute on the tokens.
nlp("akvariernes")[0]._.lemmas

This requires the lemmy package to be installed beforehand, which is easily done using:

pip install lemmy

Performance and Training

The following table shows the performance on DaNE compared to other models. The highest scores are highlighted in bold and the second highest are underlined.

Want to learn more about how the model was trained? Check out this blog post.

Issues and Usage Q&A

To ask questions, report issues or request features 🤔, please use the GitHub Issue Tracker. Questions related to SpaCy are referred to the SpaCy GitHub or forum.

Acknowledgements

This is really an acknowledgement of great open-source software and its contributors. DaCy wouldn't have been possible without the work of the SpaCy team, who developed and integrated the software; Huggingface, who developed Transformers and made model sharing convenient; and BotXO, who trained and shared the Danish BERT model, with Malte Bertelsen making it easily available. DaNLP has made it extremely easy to get access to Danish resources to train on, supplied some of the tagged data themselves, and does a great job of actively developing these datasets.

References

If you use this library in your research, please kindly cite:

@inproceedings{enevoldsen2021dacy,
    title={DaCy: A SpaCy NLP Pipeline for Danish},
    author={Enevoldsen, Kenneth},
    year={2021}
}

LICENSE

DaCy is released under the Apache License, Version 2.0. See the LICENSE file for more details.

Comments
  • Make cache dir configurable

    I would like to make the default cache dir configurable with an environment variable. This is a simple PR that allows one to do that with the variable DACY_CACHE_DIR.
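    A minimal sketch of what this resolution could look like (the default path here is illustrative, not necessarily DaCy's actual default):

    import os
    from pathlib import Path

    # hypothetical default; DACY_CACHE_DIR takes precedence when set
    DEFAULT_CACHE_DIR = Path.home() / ".dacy"

    def get_cache_dir() -> Path:
        """Resolve the model cache directory, preferring the DACY_CACHE_DIR env var."""
        return Path(os.environ.get("DACY_CACHE_DIR", DEFAULT_CACHE_DIR))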

    opened by dhpollack 9
  • Remove protobuf dependency

    dacy has a very tight version bound on some auxiliary libraries like protobuf. It's not apparent why this is required, as protobuf does not appear to be used internally, though it could of course be intentional. But the pinned version is lagging enough that it is starting to cause compatibility problems with other libraries, so it would be very helpful if the bound could be relaxed.
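    For illustration, relaxing the pin in setup.py might look like the following sketch; the version numbers are hypothetical, not dacy's actual pins:

    from setuptools import setup

    setup(
        name="dacy",
        install_requires=[
            # before (assumed): "protobuf==3.19.0", an exact pin that blocks other libraries
            "protobuf>=3.19",  # after: a permissive lower bound only
        ],
    )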

    enhancement 
    opened by Bonnevie 4
  • Add Tutorials: "Extracting text statistics and readability metrics using DaCy and Textdescriptives"

    After removing readability, it would be nice to have a tutorial on: "Extracting text statistics and readability metrics using DaCy and Textdescriptives"

    Potentially the packages could be used to examine the difference in language complexity between conversational data and legal documents in DAGW, or for a similar task using a publicly available dataset. A possible starting point is sketched below.
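    A minimal sketch of what such a tutorial could start from, assuming the TextDescriptives v2 component naming ("textdescriptives/readability") and the dacy.load interface:

    import dacy
    import textdescriptives  # noqa: F401  (registers the pipeline components)

    nlp = dacy.load("medium")
    nlp.add_pipe("textdescriptives/readability")

    doc = nlp("Dette er en kort dansk tekst om sprogteknologi.")
    print(doc._.readability)  # dict of readability metrics, e.g. lix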

    enhancement 
    opened by KennethEnevoldsen 4
  • loosen requirements

    The requirements of this package are unnecessarily strict. Specifically, I am having issues with tqdm. I have a more in-depth explanation in the issue I created, centre-for-humanities-computing/DaCy#75. There are also a few possible optimizations to your setup.py file. I notice that the requirements.txt file is not used, which could cause a mismatch between pip install -r requirements.txt and pip install .
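    One common sketch for keeping the two in sync, assuming a plain requirements.txt with one specifier per line:

    from pathlib import Path
    from setuptools import setup

    # read requirements.txt so that `pip install .` and
    # `pip install -r requirements.txt` resolve the same dependencies
    requirements = [
        line.strip()
        for line in Path("requirements.txt").read_text().splitlines()
        if line.strip() and not line.startswith("#")
    ]

    setup(
        name="dacy",
        install_requires=requirements,
    )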

    opened by dhpollack 4
  • ContextualVersionConflict Traceback (most recent call last)

    Moved from #133, originally posted by @EaLindhardt

    I've tried to install dacy through anaconda, both with pip and conda install, and via the different installation approaches described at https://centre-for-humanities-computing.github.io/DaCy/installation.html

    When running

    import dacy

    I get the following:

    ---------------------------------------------------------------------------
    ContextualVersionConflict                 Traceback (most recent call last)
    Input In [14], in <cell line: 1>()
    ----> 1 import dacy

    File ~\AppData\Roaming\Python\Python39\site-packages\dacy\__init__.py:4
          1 from dacy.hate_speech import make_offensive_transformer  # noqa
          2 from dacy.sentiment import make_emotion_transformer  # noqa
    ----> 4 from .about import download_url, title, version  # noqa
          5 from .download import download_model  # noqa
          6 from .load import load, models, where_is_my_dacy

    File ~\AppData\Roaming\Python\Python39\site-packages\dacy\about.py:3
          1 import pkg_resources
    ----> 3 version = pkg_resources.get_distribution("dacy").version
          4 title = "dacy"
          5 download_url = "https://github.com/centre-for-humanities-computing/DaCy"

    File ~\Anaconda3\lib\site-packages\pkg_resources\__init__.py:477, in get_distribution(dist)
        475     dist = Requirement.parse(dist)
        476 if isinstance(dist, Requirement):
    --> 477     dist = get_provider(dist)
        478 if not isinstance(dist, Distribution):
        479     raise TypeError("Expected string, Requirement, or Distribution", dist)

    File ~\Anaconda3\lib\site-packages\pkg_resources\__init__.py:353, in get_provider(moduleOrReq)
        351 """Return an IResourceProvider for the named module or requirement"""
        352 if isinstance(moduleOrReq, Requirement):
    --> 353     return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0]
        354 try:
        355     module = sys.modules[moduleOrReq]

    File ~\Anaconda3\lib\site-packages\pkg_resources\__init__.py:897, in WorkingSet.require(self, *requirements)
        888 def require(self, *requirements):
        889     """Ensure that distributions matching requirements are activated
        890
        891     requirements must be a string or a (possibly-nested) sequence
        (...)
        895     included, even if they were already activated in this working set.
        896     """
    --> 897     needed = self.resolve(parse_requirements(requirements))
        899     for dist in needed:
        900         self.add(dist)

    File ~\Anaconda3\lib\site-packages\pkg_resources\__init__.py:788, in WorkingSet.resolve(self, requirements, env, installer, replace_conflicting, extras)
        785 if dist not in req:
        786     # Oops, the "best" so far conflicts with a dependency
        787     dependent_req = required_by[req]
    --> 788     raise VersionConflict(dist, req).with_context(dependent_req)
        790 # push the new requirements onto the stack
        791 new_requirements = dist.requires(req.extras)[::-1]

    ContextualVersionConflict: (spacy 3.3.1 (c:\users\au576018\anaconda3\lib\site-packages), Requirement.parse('spacy<3.3.0,>=3.2.0'), {'dacy'})

    How do I solve this?

    @EaLindhardt will you please add the following information:

    • DaCy Version Used:
    • Operating System:
    • Python Version Used:
    • spaCy Version Used:
    • Environment Information:

    you can also type python -m spacy info --markdown and copy-paste the result here, along with the DaCy version, which you can get using python -c "import dacy; print(dacy.__version__)"

    bug Stale 
    opened by KennethEnevoldsen 3
  • Update WandbLogger in configs to v2

    Update WandbLogger in configs to v2. This version has the same experiment tracking features as v1 but also adds model checkpointing and dataset versioning.
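    For reference, a sketch of the updated logger section of a spaCy training config, assuming spaCy's registered WandbLogger names; the project name and paths are illustrative:

    [training.logger]
    @loggers = "spacy.WandbLogger.v2"
    project_name = "dacy"
    model_log_interval = 1000
    log_dataset_dir = "corpus"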

    opened by scottire 3
  • Augmentation

    • [x] Entity augmentation
      • [x] Gender augmentation (awareness of gender)
      • [x] Second order person augmentation (Lastname, Firstname)
      • [ ] Usernames (autogenerates e.g. WhiteTruffle101 or Kenneth Enevoldsen -> KennethEnevoldsen)
    • [ ] Misspelling augmentations, see e.g. this repo
      • [x] Keystroke error based on keyboard distance
    • [ ] Historic augmentations (see the sketch after this list)
      • [x] æ -> ae, å -> aa (and a), ø -> oe
      • [ ] uppercasing of nouns
    • [ ] Social media
      • [ ] Adding hashtags augmentation
    • [ ] Others, potentially see this tweet or this kaggle summary
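    As an illustration of the historic item above, a minimal sketch of a historic-spelling augmenter following spaCy's data-augmentation callback pattern; the function name and the always-yield-the-original choice are ours, not DaCy's API:

    from spacy.training import Example

    REPLACEMENTS = {"æ": "ae", "å": "aa", "ø": "oe"}

    def historic_augmenter(nlp, example):
        # always keep the unmodified example
        yield example
        # rewrite each gold token so the token annotations stay aligned
        example_dict = example.to_dict()
        new_tokens = []
        for token in example.reference:
            text = token.text
            for old, new in REPLACEMENTS.items():
                text = text.replace(old, new)
            new_tokens.append(text)
        new_text = "".join(
            tok + ws
            for tok, ws in zip(new_tokens, (t.whitespace_ for t in example.reference))
        )
        example_dict["token_annotation"]["ORTH"] = new_tokens
        yield example.from_dict(nlp.make_doc(new_text), example_dict)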
    enhancement 
    opened by KennethEnevoldsen 3
  • :arrow_up: Update sphinxext-opengraph requirement from <0.7.0,>=0.6.3 to >=0.6.3,<0.8.0

    Updates the requirements on sphinxext-opengraph to permit the latest version.

    Release notes

    Sourced from sphinxext-opengraph's releases.

    v0.7.4

    What's Changed

    New Contributors

    Full Changelog: https://github.com/wpilibsuite/sphinxext-opengraph/compare/v0.7.3...v0.7.4

    dependencies python 
    opened by dependabot[bot] 2
  • :arrow_up: Bump schneegans/dynamic-badges-action from 1.2.0 to 1.3.0

    Bumps schneegans/dynamic-badges-action from 1.2.0 to 1.3.0.

    Release notes

    Sourced from schneegans/dynamic-badges-action's releases.

    Dynamic Badges v1.3.0

    This release adds the possibility to auto-generate the badge color. You can read the full changelog.

    Changelog

    Sourced from schneegans/dynamic-badges-action's changelog.

    Dynamic Badges Action 1.3.0

    Release Date: 2022-04-18

    Changes

    • Added the possibility to generate the badge color automatically between red and green based on a numerical value and its bounds. Thanks to @LucasWolfgang for this contribution!
    Commits
    • a6775a6 :memo: Add changelog entry
    • 7ce4e74 :wrench: Use color range for example badge
    • a3f7e7f :memo: Improve documentation
    • 6511e52 :memo: Tweak documentation
    • e43bdee :sparkles: Tweak formatting of the code
    • 3dd7c22 :sparkles: Apply clang-format
    • ee32073 :wrench: Fix typo
    • 9bce11b Merge pull request #11 from LucasWolfgang/master (Thanks again!)
    • 53c821a :tada: Added saturation and lightness parameters
    • 6363528 :tada: Added saturation and lightness parameters
    • Additional commits viewable in compare view

    dependencies github_actions 
    opened by dependabot[bot] 2
  • Address cuda warnings and spaCy version warning.

    When running:

    import dacy
    
    for model in dacy.models():
        print(model)
    
    dacy_nlp = dacy.load('medium')
    
    doc = dacy_nlp("DaCy er en hurtig og effektiv pipeline til dansk sprogprocessering bygget i SpaCy.")
    
    print('hej')
    

    I get the following warnings:

    
    da_dacy_small_tft-0.0.0
    da_dacy_medium_tft-0.0.0
    da_dacy_large_tft-0.0.0
    da_dacy_small_trf-0.1.0
    da_dacy_medium_trf-0.1.0
    da_dacy_large_trf-0.1.0
    /venv/lib/python3.9/site-packages/spacy/util.py:833: UserWarning: [W095] Model 'da_dacy_medium_trf' (0.1.0) was trained with spaCy v3.1 and may not be 100% compatible with the current version (3.2.4). If you see errors or degraded performance, download a newer compatible model or retrain your custom model with the current spaCy version. For more details and available updates, run: python -m spacy validate
      warnings.warn(warn_msg)
    /venv/lib/python3.9/site-packages/spacy/util.py:833: UserWarning: [W095] Model 'da_dacy_small_trf' (0.1.0) was trained with spaCy v3.1 and may not be 100% compatible with the current version (3.2.4). If you see errors or degraded performance, download a newer compatible model or retrain your custom model with the current spaCy version. For more details and available updates, run: python -m spacy validate
      warnings.warn(warn_msg)
    /venv/lib/python3.9/site-packages/spacy_transformers/pipeline_component.py:406: UserWarning: Automatically converting a transformer component from spacy-transformers v1.0 to v1.1+. If you see errors or degraded performance, download a newer compatible model or retrain your custom model with the current spacy-transformers version. For more details and available updates, run: python -m spacy validate
      warnings.warn(warn_msg)
    /venv/lib/python3.9/site-packages/torch/amp/autocast_mode.py:198: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
      warnings.warn('User provided device_type of \'cuda\', but CUDA is not available. Disabling')
    /venv/lib/python3.9/site-packages/spacy/pipeline/attributeruler.py:150: UserWarning: [W036] The component 'matcher' does not have any patterns defined.
      matches = self.matcher(doc, allow_missing=True, as_spans=False)
    hej
    

    Notably, this includes three warnings, concerning the SpaCy version, the cuda device, and the matcher object (see also #72).

    original version sent to me by mail

    Note: While these warnings appear, DaCy still works as intended. The version of spaCy does not influence model performance.
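    Until the warnings are addressed, one sketch for silencing just the compatibility warning, assuming the [W095] prefix shown in the output above:

    import warnings

    # suppress only spaCy's model/version compatibility warning (code W095)
    warnings.filterwarnings("ignore", message=r"\[W095\]")

    import dacy
    nlp = dacy.load("medium")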

    opened by KennethEnevoldsen 2