Semi-supervised learning considers a particular kind of missing data problem, one in which the dependent variable (the label) is missing for part of the data. The goal of semi-supervised learning is to construct models that improve over supervised models that disregard the unlabeled data. Such models are useful in settings where unlabeled data is easy to obtain or labeling is relatively expensive. Example applications are document and image classification and protein function prediction, where additional objects are often inexpensive to obtain, but labeling them is tedious or expensive.

For my research into robust models for semi-supervised classification, I have implemented several new and existing semi-supervised learners and combined them in the RSSL package. The package also contains functions to set up benchmarking and simulation studies that compare semi-supervised algorithms, including the generation of different kinds of learning curves and cross-validation results for semi-supervised and transductive learning. The goal of this work is to make reproducible research into semi-supervised methods easier for researchers and to offer simple, consistent interfaces to semi-supervised models for practitioners. The package is still under development, and I would like to discuss how to improve the interfaces of the models so that they interact more easily with other packages.
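To make the setting concrete, the sketch below shows a minimal self-training loop in base R: a classifier is fit on the few labeled points, the most confident predictions on the unlabeled points are added as pseudo-labels, and the model is refit. This is only a conceptual illustration of the semi-supervised problem, not the package's own implementation, and none of the names used here are part of the RSSL interface.

# Conceptual sketch of self-training in base R; not RSSL code.
set.seed(1)

# Simulated two-class problem: two Gaussians in two dimensions
n <- 200
d <- data.frame(x1 = c(rnorm(n/2, -1), rnorm(n/2, 1)),
                x2 = c(rnorm(n/2, -1), rnorm(n/2, 1)),
                Class = rep(c(0, 1), each = n/2))

# Remove most of the labels to mimic the semi-supervised setting
d$y_obs <- d$Class
d$y_obs[sample(n, n - 20)] <- NA

# Self-training: fit on the labeled points, then repeatedly pseudo-label
# the most confident unlabeled points and refit
for (iter in 1:10) {
  labeled <- !is.na(d$y_obs)
  fit <- glm(y_obs ~ x1 + x2, data = d[labeled, ], family = binomial)
  if (all(labeled)) break
  p <- predict(fit, newdata = d[!labeled, ], type = "response")
  confident <- abs(p - 0.5) > 0.4   # confidence threshold, chosen arbitrarily
  if (!any(confident)) break
  d$y_obs[!labeled][confident] <- as.numeric(p[confident] > 0.5)
}

# Accuracy against the true labels (for illustration only)
mean((predict(fit, newdata = d, type = "response") > 0.5) == d$Class)

Whether such a procedure actually improves on the supervised baseline depends strongly on the data and the base learner, which is precisely the kind of comparison the benchmarking and learning-curve functionality in the package is meant to support.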