DMLC | Distributed (Deep) Machine Learning Common

DMLC is a community hosting and developing portable, scalable, and reliable open-source libraries for distributed machine learning. All projects are licensed under Apache 2.0, and contributors come from leading universities and companies. We warmly welcome new contributors and projects to join us.

View on GitHub »

Machine Learning Libraries

Wormhole

Wormhole is a place where DMLC projects work together to provide scalable and reliable machine learning toolkits that run on various platforms such as Hadoop YARN, Amazon EC2, and MPI.

View details »

XGBoost

An optimized, general-purpose gradient boosting library that includes the Generalized Linear Model (GLM) and Gradient Boosted Decision Trees (GBDT). XGBoost is parallelized, and it can also run in a distributed fashion and scale to terascale data.
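As a taste of the interface, here is a minimal sketch of training a binary classifier through XGBoost's C API; the file name and parameter values are illustrative, error checking is omitted, and header locations have moved across versions.

```cpp
#include <xgboost/c_api.h>

int main() {
  // Load training data from a LibSVM-format file into a DMatrix.
  // ("train.libsvm" is a hypothetical file name.)
  DMatrixHandle dtrain;
  XGDMatrixCreateFromFile("train.libsvm", 0, &dtrain);

  // Create a booster over the training matrix; parameter values here
  // are illustrative, not recommendations.
  BoosterHandle booster;
  XGBoosterCreate(&dtrain, 1, &booster);
  XGBoosterSetParam(booster, "objective", "binary:logistic");
  XGBoosterSetParam(booster, "max_depth", "6");
  XGBoosterSetParam(booster, "eta", "0.3");

  // Each call performs one round of gradient boosting.
  for (int iter = 0; iter < 10; ++iter) {
    XGBoosterUpdateOneIter(booster, iter, dtrain);
  }

  XGBoosterFree(booster);
  XGDMatrixFree(dtrain);
  return 0;
}
```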

View details »

CXXNET

A fast, concise, distributed deep learning framework. It provides ready-to-use configuration files and trained models for AlexNet, the Google Inception network, and others.

View details »

Minerva

A fast and flexible system for deep learning. It provides an NDArray programming interface similar to NumPy, with both Python and C++ bindings available. The resulting code can run on both multi-CPU and multi-GPU setups.

View details »


System Components

dmlc-core

High-performance data I/O libraries for various filesystems, including local disk, HDFS, and Amazon S3. It also provides job launchers for YARN, MPI, and other platforms.
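To illustrate, a minimal sketch using dmlc::Stream; the bucket and object names are hypothetical, and the same call works with local paths and hdfs:// URIs.

```cpp
#include <cstdio>
#include <dmlc/io.h>

int main() {
  // Stream::Create dispatches on the URI scheme, so the same code can
  // read from local disk, "hdfs://...", or "s3://...".
  dmlc::Stream *fs = dmlc::Stream::Create("s3://my-bucket/part-00000", "r");
  char buf[1024];
  size_t nread = fs->Read(buf, sizeof(buf));
  std::printf("read %zu bytes\n", nread);
  delete fs;
  return 0;
}
```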

View details »

ps-lite

A lightweight implementation of the parameter server framework. It provides asynchronous key-value push and pull, communication-efficient data synchronization, and a flexible data consistency model.
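For flavor, a worker-side sketch of the push/pull API; the keys and values are made up, the scheduler/server setup is elided, and constructor arguments vary across ps-lite versions.

```cpp
#include <vector>
#include "ps/ps.h"

// Worker-side sketch: push gradients to the servers, pull weights back.
// Assumes the scheduler and server nodes were launched separately by
// ps-lite's tracker; those setup calls are elided here.
void WorkerStep() {
  ps::KVWorker<float> kv(0);
  std::vector<ps::Key> keys = {1, 3, 5};          // made-up keys
  std::vector<float> grads = {0.1f, 0.2f, 0.3f};  // made-up values

  // Push is asynchronous: it returns a timestamp at once, and Wait
  // blocks until that particular operation has completed.
  int ts = kv.Push(keys, grads);
  kv.Wait(ts);

  // Pull the current values for the same keys from the servers.
  std::vector<float> weights;
  kv.Wait(kv.Pull(keys, &weights));
}
```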

View details »

rabit

Rabit is a lightweight library that provides a fault-tolerant interface for Allreduce and Broadcast. Its goal is to support portable, scalable, and reliable distributed machine learning programs.
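A minimal sketch of the Allreduce interface, in the style of rabit's basic example: each process contributes a vector, and every process receives the element-wise sum.

```cpp
#include <cstdio>
#include <rabit.h>

int main(int argc, char *argv[]) {
  rabit::Init(argc, argv);
  // Each process fills a small vector based on its rank; Allreduce
  // replaces it with the element-wise sum across all processes, and
  // every process sees the same result.
  int a[3];
  for (int i = 0; i < 3; ++i) a[i] = rabit::GetRank() + i;
  rabit::Allreduce<rabit::op::Sum>(a, 3);
  std::printf("@node[%d] after Allreduce: {%d, %d, %d}\n",
              rabit::GetRank(), a[0], a[1], a[2]);
  rabit::Finalize();
  return 0;
}
```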

View details »

mshadow

MShadow is a lightweight CPU/GPU matrix/tensor template library in C++/CUDA. It aims for maximum performance and control, while also emphasizing simplicity.
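As a sketch against a recent mshadow (build flags such as the BLAS selection are omitted), expression templates let a whole formula compile into a single elementwise loop:

```cpp
#include <cstdio>
#include "mshadow/tensor.h"
using namespace mshadow;
using namespace mshadow::expr;

int main() {
  InitTensorEngine<cpu>();
  // Wrap an existing float buffer as a 2x3 matrix; a Tensor is only a
  // handle over the data, not an owning container.
  float data[6] = {0};
  Tensor<cpu, 2, float> mat(data, Shape2(2, 3));
  // The whole right-hand side is fused into one loop over the elements,
  // with no temporary matrices allocated.
  mat = (mat + 10.0f) / 2.0f;
  std::printf("mat[0][0] = %g\n", mat[0][0]);
  ShutdownTensorEngine<cpu>();
  return 0;
}
```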

View details »


Major Developers (more than 50 commits, in alphabetical order)

Tianqi Chen

Ph.D. Student at the University of Washington

Mu Li

Ph.D. Student at Carnegie Mellon University

Minjie Wang

Ph.D. Student at NYU

Yutian Li

Master's Student at Stanford University