“if adding, modifying, or removing any of its training samples would not result in a statistically significant difference in the model parameters learned. For this reason, learning with differential privacy is, in practice, a form of regularization.”
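A minimal formal sketch of the intuition behind this quote, assuming the standard (ε, δ)-differential-privacy definition (the quoted passage itself does not state it formally): a randomized training algorithm M is (ε, δ)-differentially private if, for any two training sets D and D′ that differ in a single sample and any set S of possible outputs (here, model parameters),

    Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] + δ.

In words, no single training sample can shift the distribution over learned parameters by more than a factor of e^ε (plus a small slack δ). The mechanisms typically used to enforce this bound, such as clipping per-sample gradients and adding noise, limit how strongly the model can fit any individual example, which is why the guarantee behaves in practice like a form of regularization.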