Published a paper
How can you prevent your neural network model from becoming dependent on certain features in your dataset? I helped write a paper, now published, that focuses on a similar problem in particle physics. In it, we:
  • used an autoencoder to perform anomaly detection in physics data
  • added an adversarial extension so the classification does not become dependent on experimental error (see the sketch after this list)
  • found that this gives robust classifications that are largely unaffected by systematic errors
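To give a rough idea of what an adversarial extension to an autoencoder looks like, here is a minimal PyTorch sketch. It is not the paper's code: the network sizes, the alternating update scheme, the `lam` weight, and the idea of feeding the latent code to the adversary are all illustrative assumptions. The adversary tries to predict a nuisance parameter (standing in for a systematic effect), and the autoencoder is trained to reconstruct its input while making the adversary fail, so the representation it learns carries as little information about that systematic as possible.

```python
# Minimal sketch (assumptions throughout): an autoencoder trained adversarially
# so an auxiliary network cannot recover a nuisance parameter from its latent code.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_features=10, n_latent=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

ae = Autoencoder()
# Adversary: tries to predict the nuisance parameter from the latent code.
adversary = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1))

opt_ae = torch.optim.Adam(ae.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
mse = nn.MSELoss()
lam = 1.0  # weight of the adversarial term (hypothetical value)

def train_step(x, nuisance):
    # 1) Update the adversary: learn to infer the nuisance from the latent code.
    with torch.no_grad():
        _, z = ae(x)
    opt_adv.zero_grad()
    adv_loss = mse(adversary(z), nuisance)
    adv_loss.backward()
    opt_adv.step()

    # 2) Update the autoencoder: reconstruct well while fooling the adversary,
    #    i.e. subtract the adversary's error so minimising the total loss pushes
    #    the latent code to forget the systematic.
    opt_ae.zero_grad()
    recon, z = ae(x)
    loss = mse(recon, x) - lam * mse(adversary(z), nuisance)
    loss.backward()
    opt_ae.step()
    return loss.item()

# Toy usage: random "events", each with a known nuisance value.
x = torch.randn(256, 10)
nuisance = torch.randn(256, 1)
for _ in range(5):
    train_step(x, nuisance)
```

The alternating-update scheme above is just one common way to set up this kind of adversarial decorrelation; other setups (for example, a gradient-reversal layer) achieve the same goal of keeping the anomaly score insensitive to the systematic.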