Algorithm 1
From: Publishing neural networks in drug discovery might compromise training data privacy

Membership Inference Attack. This algorithm formalizes the membership inference attack game we use to evaluate the privacy of our neural networks. The attack assumes knowledge of the underlying data distribution (chemical space) \(\Pi\) from which the training dataset is sampled. Given an adversary \(A\), a training algorithm \(T\), and the data distribution \(\Pi\), the game proceeds by sampling a training set from \(\Pi\), training a model on it, and then challenging the adversary to infer whether a specific data point (chemical structure) was part of the training set. The game thus tests the adversary's ability to distinguish training-set members from fresh samples drawn from \(\Pi\), thereby quantifying how much information the model leaks about its training data.
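To make the game concrete, the following is a minimal Python sketch, assuming the standard coin-flip formulation of the membership inference game (the description above matches it, but the exact algorithm may differ in detail); sample_from_pi, train, and adversary are hypothetical placeholders standing in for the distribution sampler, the training algorithm \(T\), and the attack \(A\).

```python
import random

def membership_inference_game(sample_from_pi, train, adversary, n):
    """One round of the membership inference attack game.

    sample_from_pi: draws one data point (chemical structure) from Pi.
    train:          the training algorithm T; maps a dataset to a model.
    adversary:      the attack A; given a point and the trained model,
                    returns 1 ("member") or 0 ("non-member").
    n:              training-set size.
    Returns True if the adversary guessed the secret bit correctly.
    """
    # Step 1: sample a training set S of n points from Pi and train on it.
    S = [sample_from_pi() for _ in range(n)]
    model = train(S)

    # Step 2: flip a fair coin b. If b = 1, the challenge point z is a
    # member (drawn from S); otherwise z is a fresh draw from Pi.
    b = random.randint(0, 1)
    z = random.choice(S) if b == 1 else sample_from_pi()

    # Step 3: the adversary, given z and the published model, guesses b.
    return adversary(z, model) == b
```

Averaged over many independent rounds, a win rate above the 50% baseline of random guessing indicates information leakage; this gap is commonly reported as the adversary's advantage, \(2\Pr[\text{win}] - 1\).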