| Class | Description | 
|---|---|
| DataPartitionerSparkAggregator | |
| DataPartitionerSparkMapper | |
| DataPartitionLocalScheme | |
| DataPartitionSparkScheme | |
| DCLocalScheme | Disjoint_Contiguous data partitioner: for each worker, use a right indexing operation X[beg:end,] to obtain contiguous, non-overlapping partitions of rows. |
| DCSparkScheme | Spark Disjoint_Contiguous data partitioner. |
| DRLocalScheme | Disjoint_Random data partitioner: for each worker, use a permutation multiply P[beg:end,] %*% X, where P is constructed for example with P=table(seq(1,nrow(X)), sample(nrow(X), nrow(X))), i.e., sampling without replacement to ensure disjointness. |
| DRRLocalScheme | Disjoint_Round_Robin data partitioner: for each worker, use a permutation multiply or, more simply, a removeEmpty operation such as removeEmpty(target=X, margin=rows, select=(seq(1,nrow(X))%%k)==id). |
| DRRSparkScheme | Spark Disjoint_Round_Robin data partitioner. |
| DRSparkScheme | Spark Disjoint_Random data partitioner: for the current row block, find the shifted location for each row (WorkerID => (row block ID, matrix)). |
| LocalDataPartitioner | |
| ORLocalScheme | Overlap_Reshuffle data partitioner: for each worker, use a fresh permutation multiply P %*% X, where P is constructed for example with P=table(seq(1,nrow(X)), sample(nrow(X), nrow(X))). |
| ORSparkScheme | Spark Overlap_Reshuffle data partitioner. |
| SparkDataPartitioner | |

Copyright © 2020 The Apache Software Foundation. All rights reserved.
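The three local disjoint schemes above (contiguous slicing, random permutation, round robin) can be illustrated with a minimal numpy sketch. This is an assumption-laden analogue of the DML expressions quoted in the table (X[beg:end,], P %*% X, removeEmpty with a modulo selector), not the actual SystemDS Java implementations; all function names here are hypothetical.

```python
import numpy as np

def dc_partition(X, k):
    # Disjoint_Contiguous: contiguous, non-overlapping row ranges,
    # analogous to the right-indexing X[beg:end,] per worker.
    bounds = np.linspace(0, len(X), k + 1, dtype=int)
    return [X[bounds[i]:bounds[i + 1]] for i in range(k)]

def dr_partition(X, k, seed=0):
    # Disjoint_Random: permute all rows once (sampling without
    # replacement ensures disjointness), then slice contiguously,
    # mirroring the effect of P[beg:end,] %*% X.
    perm = np.random.default_rng(seed).permutation(len(X))
    return dc_partition(X[perm], k)

def drr_partition(X, k):
    # Disjoint_Round_Robin: worker id keeps rows where row_index % k == id,
    # mirroring removeEmpty with select=(seq(1,nrow(X))%%k)==id.
    return [X[np.arange(len(X)) % k == wid] for wid in range(k)]
```

In each scheme the union of the worker partitions covers every row of X exactly once; the schemes differ only in which rows a given worker sees.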