[ICSE2020] MemLock: Memory Usage Guided Fuzzing

Overview

MemLock: Memory Usage Guided Fuzzing

MIT License

This repository provides the tool and the evaluation subjects for the paper "MemLock: Memory Usage Guided Fuzzing" accepted for the technical track at ICSE'2020. A pre-print of the paper can be found at ICSE2020_MemLock.pdf.

The repository contains three folders: tool, tests and evaluation.

Tool

MemLock is built on top of the fuzzer AFL. Check out AFL's website for more details. We provide a snapshot of MemLock here and, for simplicity, a shell script that performs the whole installation.

Requirements

  • Operating System: Ubuntu 16.04 LTS (we have tested the artifact on Ubuntu 16.04)
  • Run the following command to install Docker (Docker version 18.09.7):
    $ sudo apt-get install docker.io
    (If you have any questions about Docker, see Docker's documentation.)
  • Run the following command to install the required packages:
    $ sudo apt-get install git build-essential python3 cmake tmux libtool automake autoconf autotools-dev m4 autopoint help2man bison flex texinfo zlib1g-dev libexpat1-dev libfreetype6 libfreetype6-dev

Clone the Repository

$ git clone https://github.com/wcventure/MemLock-Fuzz.git MemLock --depth=1
$ cd MemLock

Build and Run the Docker Image

First, as with AFL, system core dumps must be disabled and the CPU frequency scaling governor set to performance.

$ echo core | sudo tee /proc/sys/kernel/core_pattern
$ echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

Run the following commands to build the Docker image and configure the environment.

# build docker image
$ sudo docker build -t memlock --no-cache ./

# run docker image
$ sudo docker run --cap-add=SYS_PTRACE -it memlock /bin/bash

Usage

The command line is similar to AFL's.

To perform stack memory usage guided fuzzing, compile the program with memlock-stack-clang and then run the following command, as shown in the example tests/run_test1_MemLock.sh:

tool/MemLock/build/bin/memlock-stack-fuzz -i testcase_dir -o findings_dir -d -- /path/to/program @@

To perform heap memory usage guided fuzzing, compile the program with memlock-heap-clang and then run the following command, as shown in the example tests/run_test2_MemLock.sh:

tool/MemLock/build/bin/memlock-heap-fuzz -i testcase_dir -o findings_dir -d -- /path/to/program @@

Tests

Before using the MemLock fuzzer, we suggest first running the two simple examples we provide to check that MemLock works properly. The two examples show how MemLock detects excessive memory consumption and why AFL cannot easily detect such bugs. Example 1 demonstrates an uncontrolled-recursion bug, and Example 2 demonstrates an uncontrolled-memory-allocation bug.

Run for testing example 1

Example 1 demonstrates an uncontrolled-recursion bug. The function fact() in example1.c is recursive; with a sufficiently large recursion depth, execution runs out of stack memory, causing a stack overflow. You can fuzz this example program with the following commands.

# enter the tests folder
$ cd tests

# run testing example 1 with MemLock
$ ./run_test1_MemLock.sh

# run testing example 1 with AFL (Open another terminal)
$ ./run_test1_AFL.sh

In our experiments on example 1, MemLock finds crashes within a few minutes, while AFL does not find any.
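
For reference, the uncontrolled-recursion pattern described above looks roughly like the sketch below. This is a hypothetical reconstruction, not the shipped example1.c: only the recursive function fact() comes from the description, and the input handling is assumed.

#include <stdio.h>
#include <stdlib.h>

/* Recursive function as described for example1.c: every call consumes
 * another stack frame, so an attacker-controlled depth exhausts stack
 * memory and crashes the program with a stack overflow. */
unsigned long fact(unsigned long n) {
    if (n <= 1)
        return 1;
    return n * fact(n - 1); /* recursion depth grows linearly with n */
}

int main(int argc, char **argv) {
    if (argc < 2)
        return 0;
    FILE *f = fopen(argv[1], "r");
    if (!f)
        return 0;
    unsigned long n = 0;
    /* The recursion depth comes straight from the fuzzer-controlled input file. */
    if (fscanf(f, "%lu", &n) != 1) {
        fclose(f);
        return 0;
    }
    fclose(f);
    printf("%lu\n", fact(n));
    return 0;
}

Because the crash only occurs at extreme recursion depths, pure coverage feedback gives AFL no signal that pushes it toward deeper recursion, whereas MemLock's stack-usage feedback does.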

Run for testing example 2

Example 2 demonstrates an uncontrolled-memory-allocation bug. At line 25 of example2.c, the length of the user input is fed directly into new[]. By carefully handcrafting the input, an adversary can supply arbitrarily large values, crashing the program (i.e., std::bad_alloc) or running it out of memory. You can fuzz this example program with the following commands.

# enter the tests folder
$ cd tests

# run testing example 2 with MemLock
$ ./run_test2_MemLock.sh

# run testing example 2 with AFL (Open another terminal)
$ ./run_test2_AFL.sh

In our experiments on example 2, MemLock finds crashes within a few minutes, while AFL does not find any.
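
For reference, the uncontrolled-memory-allocation pattern described above looks roughly like the sketch below. Again, this is a hypothetical reconstruction rather than the shipped example2.c; the file handling is assumed, and only the fact that an input-derived length flows directly into new[] comes from the description.

#include <cstdio>
#include <cstdlib>
#include <cstring>

int main(int argc, char **argv) {
    if (argc < 2)
        return 0;
    FILE *f = fopen(argv[1], "rb");
    if (!f)
        return 0;
    char line[64] = {0};
    if (!fgets(line, sizeof(line), f)) {
        fclose(f);
        return 0;
    }
    fclose(f);
    /* The allocation size is taken directly from untrusted input, so a
     * handcrafted large value makes new[] throw std::bad_alloc or drives
     * the process out of memory. */
    unsigned long len = strtoul(line, nullptr, 10);
    char *buf = new char[len];
    memset(buf, 0, len);
    delete[] buf;
    return 0;
}

MemLock's heap mode rewards inputs that increase heap consumption, so it is steered toward the large length values that trigger the crash, while plain coverage feedback is not.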

Evaluation

The folder evaluation contains all our evaluation subjects. After installing MemLock, you can run the provided scripts to build and instrument the subjects; once they are instrumented, you can run the fuzzing scripts on them.

Build Target Program

In the BUILD folder, you can run the scripts ./build_xxx.sh, which show how to build and instrument each subject. For example:

# build cxxfilt
$ cd BUILD
$ ./build_cxxfilt.sh

Run for Fuzzing

After instrumenting the subjects, you can run the script ./run_MemLock_cxxfilt.sh in the FUZZ folder to start a MemLock fuzzer instance on the program cxxfilt. To compare its performance with AFL, open another terminal and run the script ./run_AFL_cxxfilt.sh.

# fuzz cxxfilt
$ cd FUZZ
$ ./run_MemLock_cxxfilt.sh

Publications

@inproceedings{wen2020memlock,
  Author = {Wen, Cheng and Wang, Haijun and Li, Yuekang and Qin, Shengchao and Liu, Yang and Xu, Zhiwu and Chen, Hongxu and Xie, Xiaofei and Pu, Geguang and Liu, Ting},
  Title = {MemLock: Memory Usage Guided Fuzzing},
  Booktitle = {2020 IEEE/ACM 42nd International Conference on Software Engineering},
  Year = {2020},
  Address = {Seoul, South Korea},
}

Practical Security Impact

CVE IDs Assigned by This Work (26 CVEs)

Our tools have found several security-critical vulnerabilities in widely used open-source projects and libraries, such as Binutils, Elfutils, Libtiff, and MJS.

Vulnerability  | Package           | Program  | Vulnerability Type
CVE-2020-36375 | MJS 1.20.1        | mjs      | CWE-674: Uncontrolled Recursion
CVE-2020-36374 | MJS 1.20.1        | mjs      | CWE-674: Uncontrolled Recursion
CVE-2020-36373 | MJS 1.20.1        | mjs      | CWE-674: Uncontrolled Recursion
CVE-2020-36372 | MJS 1.20.1        | mjs      | CWE-674: Uncontrolled Recursion
CVE-2020-36371 | MJS 1.20.1        | mjs      | CWE-674: Uncontrolled Recursion
CVE-2020-36370 | MJS 1.20.1        | mjs      | CWE-674: Uncontrolled Recursion
CVE-2020-36369 | MJS 1.20.1        | mjs      | CWE-674: Uncontrolled Recursion
CVE-2020-36368 | MJS 1.20.1        | mjs      | CWE-674: Uncontrolled Recursion
CVE-2020-36367 | MJS 1.20.1        | mjs      | CWE-674: Uncontrolled Recursion
CVE-2020-36366 | MJS 1.20.1        | mjs      | CWE-674: Uncontrolled Recursion
CVE-2020-18392 | MJS 1.20.1        | mjs      | CWE-674: Uncontrolled Recursion
CVE-2019-6293  | Flex 2.6.4        | flex     | CWE-674: Uncontrolled Recursion
CVE-2019-6292  | Yaml-cpp v0.6.2   | prase    | CWE-674: Uncontrolled Recursion
CVE-2019-6291  | NASM 2.14.03rc1   | nasm     | CWE-674: Uncontrolled Recursion
CVE-2019-6290  | NASM 2.14.03rc1   | nasm     | CWE-674: Uncontrolled Recursion
CVE-2018-18701 | Binutils 2.31     | nm       | CWE-674: Uncontrolled Recursion
CVE-2018-18700 | Binutils 2.31     | nm       | CWE-674: Uncontrolled Recursion
CVE-2018-18484 | Binutils 2.31     | c++filt  | CWE-674: Uncontrolled Recursion
CVE-2018-17985 | Binutils 2.31     | c++filt  | CWE-674: Uncontrolled Recursion
CVE-2019-7704  | Binaryen 1.38.22  | wasm-opt | CWE-789: Uncontrolled Memory Allocation
CVE-2019-7698  | Bento4 v1.5.1-627 | mp4dump  | CWE-789: Uncontrolled Memory Allocation
CVE-2019-7148  | Elfutils 0.175    | eu-ar    | CWE-789: Uncontrolled Memory Allocation
CVE-2018-20652 | Tinyexr v0.9.5    | tinyexr  | CWE-789: Uncontrolled Memory Allocation
CVE-2018-18483 | Binutils 2.31     | c++filt  | CWE-789: Uncontrolled Memory Allocation
CVE-2018-20657 | Binutils 2.31     | c++filt  | CWE-401: Memory Leak
CVE-2018-20002 | Binutils 2.31     | nm       | CWE-401: Memory Leak

Video

Links

Owner
Cheng Wen
I am a Ph.D. student at Shenzhen University. My research interests are in the areas of cyber security (SEC), programming languages (PL), and software engineering (SE).