DeepAmandine is an artificial intelligence you can talk to for hours without noticing the difference.

Overview

DeepAmandine

This is an artificial intelligence based on GPT-3 that you can chat with; it is very friendly and tells a lot of jokes. We wish you a good experience with the AI and hope you have fun.

[screenshot: screen_1]


Installation and usage

- To use the Android version (v1.0-beta):

1. Install the required Python libraries:

$ wget https://github.com/BuyWithCrypto/deep-amandine/releases/download/v1.0-beta/requirements.txt && pip3 install -r requirements.txt

2. Download the executable file:

$ wget https://github.com/BuyWithCrypto/deep-amandine/releases/download/v1.0-beta/DeepAmandine-android-v1.0-beta.pyc

3. Run the executable:

$ python3 DeepAmandine-android-v1.0-beta.pyc

- To use the Desktop version (v1.0-beta):

1. Install the required Python libraries:

$ wget https://github.com/BuyWithCrypto/deep-amandine/releases/download/v1.0-beta/requirements.txt && pip3 install -r requirements.txt

2. Download the executable file:

$ wget https://github.com/BuyWithCrypto/deep-amandine/releases/download/v1.0-beta/DeepAmandine-desktop-v1.0-beta.pyc

3. Run the executable:

$ python3 DeepAmandine-desktop-v1.0-beta.pyc

Examples of use

You can select a language to speak with our AI.

[screenshot: screen_1]

You can select a username to chat with the AI.

[screenshot: screen_2]

Once all the steps have been completed, you can start talking to the AI.

[screenshot: screen_3]
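
The released files are compiled .pyc modules, so the chat logic itself is not visible in this repository. Purely as an illustration of the kind of GPT-3 chat loop described above, here is a minimal sketch using the openai Python package; the engine name, prompt format, language/username prompts, and the OPENAI_API_KEY environment variable are assumptions for the example, not the project's actual configuration.

import os
import openai  # assumes the openai package is installed: pip3 install openai

# Illustrative sketch only: this is NOT DeepAmandine's actual code,
# just a minimal GPT-3 chat loop for reference.
openai.api_key = os.environ["OPENAI_API_KEY"]  # assumed: API key supplied via the environment

def ask(history, username, user_message):
    # Append the new message to the conversation and ask a GPT-3 completion
    # engine for Amandine's next reply.
    prompt = history + "\n" + username + ": " + user_message + "\nAmandine:"
    response = openai.Completion.create(
        engine="davinci",              # assumed engine; any GPT-3 completion engine would do
        prompt=prompt,
        max_tokens=150,
        temperature=0.8,
        stop=["\n" + username + ":"],  # stop before the model writes the user's next turn
    )
    return response.choices[0].text.strip()

# The real application first asks for a language and a username (see the
# screenshots above); the prompts below only mimic that flow.
language = input("Language: ")
username = input("Username: ")

history = ("The following is a friendly chat, in " + language +
           ", with Amandine, an AI that loves to joke.")
while True:
    message = input(username + ": ")
    reply = ask(history, username, message)
    history += "\n" + username + ": " + message + "\nAmandine: " + reply
    print("Amandine: " + reply)

A session then proceeds as in the screenshots: each message you type is sent with the running conversation, and the model's completion is printed as Amandine's reply.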
