MetaDrive: Composing Diverse Driving Scenarios for Generalizable Reinforcement Learning

Overview

MetaDrive is a driving simulator with the following key features:

  • Compositional: It supports generating infinite scenes with various road maps and traffic settings for the research of generalizable RL.
  • Lightweight: It is easy to install and run. It can run up to 300 FPS on a standard PC.
  • Realistic: Accurate physics simulation and multiple sensory inputs, including Lidar, RGB images, top-down semantic maps and first-person view images.

🛠 Quick Start

Install MetaDrive via:

git clone https://github.com/decisionforce/metadrive.git
cd metadrive
pip install -e .

or

pip install metadrive-simulator

Note that the program is tested on both Linux and Windows. Some control and display issues on macOS remain to be resolved.

You can verify the installation of MetaDrive by running the test script:

# Run this in a folder that does not contain a sub-folder named metadrive
python -m metadrive.examples.profile_metadrive

Note: do not run the above command in a folder that contains a sub-folder called ./metadrive.

🚕 Examples

Run the following command to launch a simple driving scenario with auto-drive mode on. Press W, A, S, D to drive the vehicle manually.

python -m metadrive.examples.drive_in_single_agent_env

Run the following command to launch a safe driving scenario, which includes more complex obstacles and yields a cost signal (see the sketch below the command).

python -m metadrive.examples.drive_in_safe_metadrive_env
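If you want to consume that cost signal programmatically, here is a minimal sketch. It assumes the per-step cost is exposed under the "cost" key of the info dict returned by step (verify against your version's documentation); the environment ID is one of those registered when importing metadrive.

import gym
import metadrive  # registers the MetaDrive environments

env = gym.make("SafeMetaDrive-validation-v0")
env.reset()
episode_cost = 0.0
for _ in range(100):
    obs, reward, done, info = env.step(env.action_space.sample())
    episode_cost += info.get("cost", 0.0)  # accumulate constraint violations (assumed key)
    if done:
        break
env.close()
print("episode cost:", episode_cost)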

You can also launch an instance of a multi-agent scenario as follows:

python -m metadrive.examples.drive_in_multi_agent_env --env roundabout

or launch it and render with the Pygame front end:

python -m metadrive.examples.drive_in_multi_agent_env --pygame_render --env roundabout

The --env argument can be one of:

  • roundabout (default)
  • intersection
  • tollgate
  • bottleneck
  • parkinglot
  • pgmap

Run the procedural generation example, which creates a new map, as follows:

python -m metadrive.examples.procedural_generation

Note that the above example scripts cannot be run on a headless machine. Please refer to the installation guide in the documentation for more information on running MetaDrive on a headless machine.

Run the following command to draw the generated maps from procedural generation:

python -m metadrive.examples.draw_maps

To build the RL environment in a Python script, you can simply follow the OpenAI Gym format:

import metadrive  # Import this package to register the environment!
import gym

env = gym.make("MetaDrive-v0", config=dict(use_render=True))
# env = metadrive.MetaDriveEnv(config=dict(environment_num=100))  # Or build environment from class
env.reset()
for i in range(1000):
    obs, reward, done, info = env.step(env.action_space.sample())  # Use random policy
    env.render()
    if done:
        env.reset()
env.close()
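Since the scenes are procedurally generated, train/test splits for generalization experiments are straightforward. The sketch below separates training and test scenes by seed range: environment_num appears in the class-based constructor commented above, while start_seed is an assumed config key that offsets which scenes are sampled, so check it against the configuration documentation. The engine appears to be a per-process singleton (see the parallelization discussion in the comments below), so the sketch closes one environment before resetting the other.

import metadrive

# Training and test environments built from disjoint scene sets.
train_env = metadrive.MetaDriveEnv(config=dict(environment_num=100, start_seed=0))
train_env.reset()  # each reset samples one of the 100 training scenes
train_env.close()

test_env = metadrive.MetaDriveEnv(config=dict(environment_num=50, start_seed=500))
test_env.reset()  # held-out scenes the agent never trained on
test_env.close()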

🏫 Documentations

Find more details in the MetaDrive documentation.

📎 References

Work in progress!


Comments
  • Reproducibility Problem

    Hello,

    I am trying to create custom scenarios. For that, I created a custom map similar to vis_a_small_town.py, and I am using the drive_in_multi_agent_env.py example. The environment is defined as follows:

    env = envs[env_cls_name](
        {
            "use_render": True,  # if not args.pygame_render else False,
            "manual_control": True,
            "crash_done": False,
            # "agent_policy": ManualControllableIDMPolicy,
            "num_agents": total_agent_number,
            # "prefer_track_agent": "agent3",
            "show_fps": True,
            "vehicle_config": {
                "lidar": {"num_others": total_agent_number},
                "show_lidar": False,
            },
            "target_vehicle_configs": {
                "agent{}".format(i): {
                    # "spawn_lateral": i * 2,
                    "spawn_longitude": i * 10,
                    # "spawn_lane_index": 0,
                    "vehicle_model": vehicle_model_list[i],
                    # "max_engine_force": 1,
                    "max_speed": 100,
                }
                for i in range(5)
            },
        }
    )

    I have a list of steering and throttle_brake values. The scenario consists of almost 1600 steps. At each step I look up the corresponding values by step number and pass them into env.step:

    o, r, d, info = env.step({
        'agent0': [agent0.steering, agent0.pedal],
        'agent1': [agent1.steering, agent1.pedal],
        'agent2': [agent2.steering, agent2.pedal],
        'agent3': [agent3.steering, agent3.pedal],
        'agent4': [agent4.steering, agent4.pedal],
        'agent5': [agent5.steering, agent5.pedal],
        'agent6': [agent6.steering, agent6.pedal],
        'agent7': [agent7.steering, agent7.pedal],
        'agent8': [agent8.steering, agent8.pedal],
        'agent9': [agent9.steering, agent9.pedal],
    })

    When the end of the command list is reached, the loop counter is reset to zero and the commands are reused. The vehicles' positions, speeds, and headings are also reset to the initial values stored in a dictionary:

    def initialize_vehicles(env):
        global total_agent_number
        for i in range(total_agent_number):
            agent_str = "agent" + str(i)
            env.vehicles[agent_str].set_heading_theta(vehicles_initial_values[agent_str]['initial_heading_theta'])
            env.vehicles[agent_str].set_position([
                vehicles_initial_values[agent_str]['initial_position_x'],
                vehicles_initial_values[agent_str]['initial_position_y'],
            ])  # x, y relative to the first block of the map
            env.vehicles[agent_str].set_velocity(
                env.vehicles[agent_str].velocity_direction,
                vehicles_initial_values[agent_str]['initial_velocity'],
            )

    I want to reproduce the scenario and test my main algorithm. However, the problem is that the vehicles do not act the same way in every run of the scenario. I checked my vehicle commands using:

    env.vehicles["agent0"].steering,env.vehicles["agent0"].throttle_brake

    The vehicle commands are identical across repetitions of the scenario.

    When I don't use a loop and start MetaDrive from the terminal, I mostly see the same actions from the cars; I tested almost 10 times. But in the loop case, the cars start to act differently after the first loop.

    Reproducibility is a huge concern for me. Is it something about the physics engine? Are there any configuration parameters for the engine?
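    To localize where two runs diverge, a minimal divergence check over the replay protocol described above might look like the sketch below. It assumes env.vehicles[agent_id] exposes a position attribute (the setters above suggest it does, but verify); commands and agent_ids are the structures defined earlier.

    import numpy as np

    def record_positions(env, commands, agent_ids):
        """Run one replay and record every agent's position at every step."""
        env.reset()
        trace = []
        for cmd in commands:  # cmd maps agent id -> [steering, pedal]
            env.step(cmd)
            trace.append({a: np.array(env.vehicles[a].position) for a in agent_ids})
        return trace

    def first_divergence(trace_a, trace_b, tol=1e-6):
        """Return the first (step, agent) where two recorded replays differ."""
        for step, (frame_a, frame_b) in enumerate(zip(trace_a, trace_b)):
            for agent in frame_a:
                if np.linalg.norm(frame_a[agent] - frame_b[agent]) > tol:
                    return step, agent
        return None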

    Thanks!!

    opened by BedirhanKeskin 9
  • Add more description for Waymo dataset

    What changes do you make in this PR?

    • Please describe why you create this PR

    Checklist

    • [ ] I have merged the latest main branch into current branch.
    • [ ] I have run bash scripts/format.sh before merging.
    • Please use "squash and merge" mode.
    opened by pengzhenghao 6
  • Constant FPS mode

    Is there a way to set a constant-FPS mode? I tried env.engine.force_fps.toggle(); afterwards env.engine.force_fps.fps shows 50, but the visualization shows 10-16 FPS in the top-right corner. Is there any other way? Thanks in advance!
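    A generic workaround, independent of MetaDrive's force_fps API, is to pace the stepping loop against the wall clock; note this can only slow a loop down, so if rendering itself runs at 10-16 FPS the renderer is the bottleneck:

    import time

    TARGET_DT = 1.0 / 50.0  # aim for 50 steps per second

    def paced_loop(env, policy, steps=1000):
        obs = env.reset()
        for _ in range(steps):
            start = time.time()
            obs, reward, done, info = env.step(policy(obs))
            env.render()
            if done:
                obs = env.reset()
            remaining = TARGET_DT - (time.time() - start)
            if remaining > 0:
                time.sleep(remaining)  # pad the step up to the target period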

    opened by bbenja 6
  • What is neighbours_distance ?

    Hello, what is neighbours_distance, and how does it differ from the distance defined inside the Lidar config? Both are inside MULTI_AGENT_METADRIVE_DEFAULT_CONFIG. I guess the unit is meters?

    opened by BedirhanKeskin 5
  • I encountered an error at an unknown location during runtime

    Hello,

    Successfully registered the following environments: ['MetaDrive-validation-v0', 'MetaDrive-10env-v0', 'MetaDrive-100envs-v0', 'MetaDrive-1000envs-v0', 'SafeMetaDrive-validation-v0', 'SafeMetaDrive-10env-v0', 'SafeMetaDrive-100envs-v0', 'SafeMetaDrive-1000envs-v0', 'MARLTollgate-v0', 'MARLBottleneck-v0', 'MARLRoundabout-v0', 'MARLIntersection-v0', 'MARLParkingLot-v0', 'MARLMetaDrive-v0']. Known pipe types: wglGraphicsPipe (all display modules loaded.)

    opened by shushushulian 4
  • about panda3d

    When I run python -m metadrive.examples.drive_in_safe_metadrive_env with use_render=True, the output is: Successfully registered the following environments: ['MetaDrive-validation-v0', 'MetaDrive-10env-v0', 'MetaDrive-100envs-v0', 'MetaDrive-1000envs-v0', 'SafeMetaDrive-validation-v0', 'SafeMetaDrive-10env-v0', 'SafeMetaDrive-100envs-v0', 'SafeMetaDrive-1000envs-v0', 'MARLTollgate-v0', 'MARLBottleneck-v0', 'MARLRoundabout-v0', 'MARLIntersection-v0', 'MARLParkingLot-v0', 'MARLMetaDrive-v0']. Known pipe types: glxGraphicsPipe (1 aux display modules not yet loaded.)

    opened by benicioolee 4
  • RGB Camera returns time-buffered grayscale images

    Hi, I am running a vanilla MetaDriveEnv with the rgb camera sensor.

    veh_config = dict(
        image_source="rgb_camera",
        rgb_camera=(IMG_DIM, IMG_DIM))
    

    I wanted to see the images the sensor was producing, so I saved a few of them using PIL:

    import numpy as np
    from PIL import Image

    action = np.array([0, 0])
    obs, reward, done, info = env.step(action)
    img = Image.fromarray(np.array(obs['image'] * 256, np.uint8))
    img.save("test.jpeg")

    I noticed that the images all looked grayscale. Upon further inspection I found the following behavior. Suppose we want (N, N) images, which should be represented as arrays of shape (N, N, 3):

    Step 0: image[:,:,0] = zeros(N,N); image[:,:,1] = zeros(N,N); image[:,:,2] = zeros(N,N)
    Step 1: image[:,:,0] = zeros(N,N); image[:,:,1] = zeros(N,N); image[:,:,2] = m1
    Step 2: image[:,:,0] = zeros(N,N); image[:,:,1] = m1;         image[:,:,2] = m2
    Step 3: image[:,:,0] = m1;         image[:,:,1] = m2;         image[:,:,2] = m3

    where m1, m2, m3 are (N,N) matrices.

    So the images are in reality displaying three different timesteps, with the color channels carrying the time information (R = t-2, G = t-1, B = t). That is why the images look mostly gray: the values are nearly identical almost everywhere, except where there is movement (contours), where the lines look colorful and strange.

    Apologies if this is expected behavior, and I just had some configuration incorrect.
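    If the channel layout above is correct, the most recent frame can be recovered by taking the last channel. A small sketch, assuming obs['image'] is an (N, N, 3) float array in [0, 1] as in the saving snippet above:

    import numpy as np
    from PIL import Image

    def latest_frame(obs):
        img = np.asarray(obs["image"])          # (N, N, 3); channels = t-2, t-1, t
        latest = img[:, :, -1]                  # last channel = current timestep
        return (latest * 255).astype(np.uint8)  # 8-bit grayscale

    # Image.fromarray infers mode "L" for a 2-D uint8 array:
    # Image.fromarray(latest_frame(obs)).save("frame.png")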


    opened by EdAlexAguilar 4
  • Fix close and reset issue

    What changes do you make in this PR?

    • Please describe why you create this PR

    close #191

    Checklist

    • [x] I have merged the latest main branch into current branch.
    • [x] I have run bash scripts/format.sh before merging.
    • Please use "squash and merge" mode.
    opened by pengzhenghao 4
  • Selection of parameters in Rllib training for SAC agent in MetaDriveEnv and SafeMetaDriveEnv

    What are the proper buffer size / batch size / entropy coefficient for SAC to reproduce the results? I find it hard to reproduce the results in SafeMetaDriveEnv. In https://arxiv.org/pdf/2109.12674.pdf, does the reported success rate of SAC in Table 1 refer to the training success rate (with no collision, i.e. safe_rl_env=True)?

    opened by HenryLHH 4
  • Suggestion to run multiple instances in parallel?

    First of all, I would like to express my gratitude for this great project. I really like the feature-rich and lightweight nature of MetaDrive as a driving simulator for reinforcement learning.

    I am wondering what is the recommended way to run multiple MetaDrive instances in parallel (each one with a single ego-car agent)? This seems to be a common use case for reinforcement learning training. I am currently running a batch of MetaDrive simulators, each wrapped in its own process, which does seem to have the overhead of extra resources and communication/synchronization.

    Another problem I encountered when running multiple instances (say, 60 instances on a single machine) in their own processes is that I get a lot of warnings like this:

    ALSA lib pulse.c:242:(pulse_connect) PulseAudio: Unable to connect: Connection terminated
    (the line above is repeated many times)

    I guess this has something to do with audio. It happens even though I am running the top-down environment, which should not involve sound. Do you see these warnings when running multiple instances as well?

    Also, is there a plan to have a vectorized batch version?
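    For reference, a minimal sketch of the process-per-instance pattern described above: one MetaDriveEnv per worker process, with actions and transitions exchanged over a pipe. Details such as the spawn context and the config are assumptions to adapt.

    import multiprocessing as mp

    def worker(conn, config):
        from metadrive import MetaDriveEnv  # import inside the subprocess
        env = MetaDriveEnv(config=config)
        env.reset()
        while True:
            action = conn.recv()
            if action is None:  # shutdown signal
                break
            obs, reward, done, info = env.step(action)
            if done:
                env.reset()
            conn.send((obs, reward, done))
        env.close()
        conn.close()

    if __name__ == "__main__":
        ctx = mp.get_context("spawn")  # avoid inheriting engine state via fork
        parent, child = ctx.Pipe()
        p = ctx.Process(target=worker, args=(child, dict(use_render=False)))
        p.start()
        parent.send([0.0, 0.5])  # one action: [steering, throttle]
        print(parent.recv()[1])  # reward returned by the worker
        parent.send(None)        # shut the worker down
        p.join()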

    Thanks!

    opened by breakds 4
  • Rendering FPS of example script is too low

    When I run python -m metadrive.examples.drive_in_single_agent_env, I find the frame rate is only about 4 FPS. I used the nvidia-smi command and found that my RTX 2060 GPU was not being used.

    I can also see warnings like: WARNING:root: It seems you don't install our cython utilities yet! Please reinstall MetaDrive via .........

    opened by feidieufo 4
  • Errors when running metadrive.tests.scripts.generate_video_for_image_obs

    In the metadrive directory, I ran python -m metadrive.tests.scripts.generate_video_for_image_obs, and it reported the following error:

    Successfully registered the following environments: ['MetaDrive-validation-v0', 'MetaDrive-10env-v0', 'MetaDrive-100envs-v0', 'MetaDrive-1000envs-v0', 'SafeMetaDrive-validation-v0', 'SafeMetaDrive-10env-v0', 'SafeMetaDrive-100envs-v0', 'SafeMetaDrive-1000envs-v0', 'MARLTollgate-v0', 'MARLBottleneck-v0', 'MARLRoundabout-v0', 'MARLIntersection-v0', 'MARLParkingLot-v0', 'MARLMetaDrive-v0'].
    :display(warning): Unable to load libpandagles2.so: No error.
    Known pipe types:
    (all display modules loaded.)
    Traceback (most recent call last):
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/Users/queenie/Documents/metadrive/metadrive/tests/scripts/generate_video_for_image_obs.py", line 157, in <module>
        env.reset()
      File "/Users/queenie/Documents/metadrive/metadrive/envs/base_env.py", line 333, in reset
        self.lazy_init()  # it only works the first time when reset() is called to avoid the error when render
      File "/Users/queenie/Documents/metadrive/metadrive/envs/base_env.py", line 234, in lazy_init
        self.engine = initialize_engine(self.config)
      File "/Users/queenie/Documents/metadrive/metadrive/engine/engine_utils.py", line 11, in initialize_engine
        cls.singleton = cls(env_global_config)
      File "/Users/queenie/Documents/metadrive/metadrive/engine/base_engine.py", line 28, in __init__
        EngineCore.__init__(self, global_config)
      File "/Users/queenie/Documents/metadrive/metadrive/engine/core/engine_core.py", line 135, in __init__
        super(EngineCore, self).__init__(windowType=self.mode)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 339, in __init__
        self.openDefaultWindow(startDirect=False, props=props)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 1021, in openDefaultWindow
        self.openMainWindow(*args, **kw)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 1056, in openMainWindow
        self.openWindow(*args, **kw)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 766, in openWindow
        win = func()
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 752, in <lambda>
        callbackWindowDict = callbackWindowDict)
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 818, in _doOpenWindow
        self.makeDefaultPipe()
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/showbase/ShowBase.py", line 648, in makeDefaultPipe
        "No graphics pipe is available!\n"
      File "/Users/queenie/anaconda3/envs/drivemeta/lib/python3.7/site-packages/direct/directnotify/Notifier.py", line 130, in error
        raise exception(errorString)
    Exception: No graphics pipe is available!
    Your Config.prc file must name at least one valid panda display library via load-display or aux-display.

    opened by YouSonicAI 2
  • opencv-python-headless in requirements seems to create conflict?

    It sometimes has a different version from opencv-python, which can cause issues. It is only used in top-down rendering. Can we change this dependency to opencv-python?

    opened by pengzhenghao 0