drake

Why use drake?

The drake R package is a workflow manager and computational engine for data science projects. Its primary objective is to keep results up to date with the code and data they come from. When it runs a project, drake detects any pre-existing output and refreshes the pieces that are outdated or missing. Not every runthrough starts from scratch, and the final answers are reproducible. With a user-friendly R-focused interface, comprehensive documentation, and extensive implicit parallel computing support, drake surpasses the analogous functionality in similar tools such as Make, remake, memoise, and knitr.

What gets done stays done.

Too many data science projects follow a Sisyphean loop:

  1. Launch the code.
  2. Wait for it to finish.
  3. Discover an issue.
  4. Restart from scratch.

But drake automatically

  1. Launches the parts that changed since the last runthrough.
  2. Skips the rest.

# Drake comes with a basic example.
load_basic_example(verbose = FALSE)

# The `my_plan` data frame lists the steps of a data analysis workflow.
# Drake's `make()` function runs the commands to build the targets
# in the correct order.
my_plan

##              target                          command
## 1       'report.md' knit('report.Rmd', quiet = TRUE)
## 2             small                      simulate(5)
## 3             large                     simulate(50)
## 4 regression1_small                      reg1(small)
## 5 regression1_large                      reg1(large)
## 6 regression2_small                      reg2(small)

# First round: drake builds all 15 targets.
make(my_plan)

## target large
## target small
## target regression1_large
## target regression1_small
## target regression2_large
## target regression2_small
## target coef_regression1_large
## target coef_regression1_small
## target coef_regression2_large
## target coef_regression2_small
## target summ_regression1_large
## target summ_regression1_small
## target summ_regression2_large
## target summ_regression2_small
## target 'report.md'

# If you change the reg2() function,
# all the regression2 targets are out of date,
# which in turn affects 'report.md'.
reg2 <- function(d){
  d$x4 <- d$x ^ 4
  lm(y ~ x4, data = d)
}

# Second round: drake only rebuilds the targets
# that depend on the things you changed.
make(my_plan)

## target regression2_large
## target regression2_small
## target coef_regression2_large
## target coef_regression2_small
## target summ_regression2_large
## target summ_regression2_small
## target 'report.md'

# If nothing important changed, drake rebuilds nothing.
make(my_plan)

## All targets are already up to date.

Stay reproducible.

The R community likes to emphasize reproducibility, which one could interpret to mean scientific replicability, literate programming with knitr, or version control with git. But internal consistency is important too. Reproducibility carries the promise that your output matches the code and data it came from. Ordinarily, you might have to rerun everything from scratch just to be sure. But with drake, you can just check that all your targets are up to date.

make(my_plan)

## All targets are already up to date.

config <- drake_config(my_plan)
outdated(config) # Which targets are out of date?

## character(0)

Aggressively scale up.

Not every project can complete in a single R session on your laptop. Some projects need more speed or computing power. Some require a few local processor cores, and some need large high-performance computing systems. But parallel computing is hard. Your tables and figures depend on your analysis results, and your analyses depend on your datasets, so some tasks must finish before others even begin. But drake knows what to do. Parallelism is implicit and automatic. See the parallelism vignette for all the details.

# Use the spare cores on your local machine.
make(my_plan, jobs = 4)

# Scale up to a supercomputer.
drake_batchtools_tmpl_file("slurm") # Write the batchtools.slurm.tmpl template file.
future::plan(batchtools_slurm, template = "batchtools.slurm.tmpl", workers = 100)
make(my_plan, parallelism = "future_lapply")

The network graph allows drake to wait for dependencies.

# Change some code.
reg2 <- function(d){
  d$x3 <- d$x ^ 3
  lm(y ~ x3, data = d)
}

# Plot an interactive graph.
config <- drake_config(my_plan)
vis_drake_graph(config)

Within each column above, the nodes are conditionally independent given their dependencies. Each make() walks through the columns from left to right and applies parallel processing within each column. If any nodes are already up to date, drake looks downstream to maximize the number of outdated targets in a parallelizable stage. To show the parallelizable stages of the next make() programmatically, use the parallel_stages() function.
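The parallelizable stages described here can be previewed before anything runs. Below is a minimal sketch using the parallel_stages() function mentioned above (assuming the basic example from earlier; the exact output columns may differ across drake versions):

```r
library(drake)
load_basic_example()
config <- drake_config(my_plan, jobs = 2)
# One row per target; the stage column groups the targets
# that the next make() could build concurrently.
parallel_stages(config)
```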


Installation
You can choose among different versions of drake:

# Install the latest stable release from CRAN.
install.packages("drake")

# Alternatively, install the development version from GitHub.
devtools::install_github("ropensci/drake")

Documentation
Drake has a documentation website. The reference section lists all the available functions. Here are the most important ones.

  • drake_plan(): create a workflow data frame (like my_plan).
  • make(): build your project.
  • loadd(): load one or more built targets into your R session.
  • readd(): read and return a built target.
  • drake_config(): create a master configuration list for other user-side functions.
  • vis_drake_graph(): show an interactive visual network representation of your workflow.
  • outdated(): see which targets will be built in the next make().
  • deps(): check the dependencies of a command or function.
  • failed(): list the targets that failed to build in the last make().
  • diagnose(): return the complete error log of a target that failed.
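To show how a few of these functions fit together, here is a minimal sketch based on the basic example (target names follow the plan shown earlier):

```r
library(drake)
load_basic_example()
make(my_plan)          # Build every target in the plan.
config <- drake_config(my_plan)
outdated(config)       # character(0) means nothing is left to build.
readd(small)           # Read and return the target named "small".
loadd(large)           # Load the target "large" into your session.
head(large)
```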

The articles below are tutorials taken from the package vignettes.

For context and history, you can listen to a full-length interview about drake in episode 22 of the R Podcast.

Help and troubleshooting

Please refer to the troubleshooting guide on the GitHub page for instructions.


Contributing

Bug reports, suggestions, and code are welcome. Please see the contributing guidelines. Maintainers and contributors must follow this repository’s code of conduct.

Similar work

GNU Make

The original idea of a time-saving reproducible build system extends back at least as far as GNU Make, which still aids the work of data scientists as well as the original user base of compiled-language programmers. In fact, the name “drake” stands for “Data Frames in R for Make”. Make is used widely in reproducible research. Below are some examples from Karl Broman’s website.

There are several reasons for R users to prefer drake instead.

  • Drake already has a Make-powered parallel backend. Just run make(..., parallelism = "Makefile", jobs = 2) to enjoy most of the original benefits of Make itself.
  • Improved scalability. With Make, you must write a potentially large and cumbersome Makefile by hand. But with drake, you can use wildcard templating to automatically generate massive collections of targets with minimal code.
  • Lower overhead for light-weight tasks. For each Make target that uses R, a brand new R session must spawn. For projects with thousands of small targets, that means more time may be spent loading R sessions than doing the actual work. With make(..., parallelism = "mclapply", jobs = 4), drake launches 4 persistent workers up front and efficiently processes the targets in R.
  • Convenient organization of output. With Make, the user must save each target as a file. Drake saves all the results for you automatically in a storr cache so you do not have to micromanage the results.
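The wildcard templating mentioned above can be sketched as follows. This uses evaluate_plan() and its wildcard/values arguments from drake's interface; the exact expanded target names may vary by version:

```r
library(drake)
# One template row expands into one target per dataset.
plan <- drake_plan(regression1 = reg1(dataset__))
evaluate_plan(plan, wildcard = "dataset__", values = c("small", "large"))
# Should yield targets such as regression1_small and regression1_large,
# with commands reg1(small) and reg1(large).
```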


remake
Drake overlaps with its direct predecessor, remake. In fact, drake owes its core ideas to remake and Rich Fitzjohn. Remake’s development repository lists several real-world applications. Drake surpasses remake in several important ways, including but not limited to the following.

  1. High-performance computing. Remake has no native parallel computing support. Drake, on the other hand, has a vast arsenal of parallel computing options, from local multicore computing to serious distributed computing. Thanks to future, future.batchtools, and batchtools, it is straightforward to configure a drake project for most popular job schedulers, such as SLURM, TORQUE, and the Sun/Univa Grid Engine, as well as systems contained in Docker images.
  2. A friendly interface. In remake, the user must manually write a YAML configuration file to arrange the steps of a workflow, which leads to some of the same scalability problems as Make. Drake’s data-frame-based interface and wildcard templating functionality easily generate workflows at scale.
  3. Thorough documentation. Drake contains nine vignettes, a comprehensive README, examples in the help files of user-side functions, and accessible example code that users can write with drake::example_drake().
  4. Active maintenance. Drake is actively developed and maintained, and issues are usually solved promptly.
  5. Presence on CRAN. At the time of writing, drake is available on CRAN, but remake is not.


memoise
Memoization is the strategic caching of the return values of functions. Every time a memoized function is called with a new set of arguments, the return value is saved for future use. Later, whenever the same function is called with the same arguments, the previous return value is salvaged, and the function call is skipped to save time. The memoise package is an excellent implementation of memoization in R.
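For concreteness, here is a minimal sketch of memoization with the memoise package (slow_square is a made-up example function):

```r
library(memoise)
slow_square <- function(x) {
  Sys.sleep(1) # Simulate an expensive computation.
  x ^ 2
}
fast_square <- memoise(slow_square)
fast_square(2) # First call with these arguments: takes about a second.
fast_square(2) # Repeat call: the cached result returns almost instantly.
```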

However, memoization does not go far enough. In reality, the return value of a function depends not only on the function body and the arguments, but also on any nested functions and global variables, the dependencies of those dependencies, and so on upstream. Drake surpasses memoise because it uses the entire dependency network graph of a project to decide which pieces need to be rebuilt and which ones can be skipped.


knitr
Much of the R community uses knitr for reproducible research. The idea is to intersperse code chunks in an R Markdown or *.Rnw file and then generate a dynamic report that weaves together code, output, and prose. Knitr is not designed to be a serious pipeline toolkit, and it should not be the primary computational engine for medium to large data analysis projects.

  1. Knitr scales far worse than Make or remake. The whole point is to consolidate output and prose, so it deliberately lacks the essential modularity.
  2. There is no obvious high-performance computing support.
  3. While there is a way to skip chunks that are already up to date (with code chunk options cache and autodep), this functionality is not the focus of knitr. It is deactivated by default, and remake and drake are more dependable ways to skip work that is already up to date.
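For reference, chunk-level caching in knitr is enabled per chunk with the cache and autodep options named above (a sketch; the chunk label and model are made up):

````markdown
```{r model, cache = TRUE, autodep = TRUE}
fit <- lm(y ~ x, data = dataset)
summary(fit)
```
````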

As the basic example demonstrates, drake should manage the entire workflow, and any knitr reports should quickly build as targets at the very end. The strategy is analogous for knitr reports within remake projects.

Factual’s Drake

Factual’s Drake is similar in concept, but the development effort is completely unrelated to the drake R package.

Other pipeline toolkits

There are countless other successful pipeline toolkits. The drake package distinguishes itself with its R-focused approach, Tidyverse-friendly interface, and wide selection of parallel computing backends.


Acknowledgements
Many thanks to Julia Lowndes, Ben Marwick, and Peter Slaughter for reviewing drake for rOpenSci, and to Maëlle Salmon for such active involvement as the editor. Thanks also to the following people for contributing early in development.

Special thanks to Jarad Niemi, my advisor from graduate school, for first introducing me to the idea of Makefiles for research. It took several months to convince me, and I am grateful that he succeeded.