Both researchers and industry have increasingly adopted machine learning (ML) in new applications as the Digital Revolution marches on. However, without careful consideration of these rapid changes, undiscovered attack surfaces may remain open, allowing bad actors to breach a system's security or leak sensitive information. In this work we investigate attacks both with and against machine learning, starting in the application space of authentication, which has seen the adoption of ML, before generalizing to ML model applications at large. We explore a range of attacks: ML-assisted behavioral side-channel attacks against novel authentication systems, random input attacks against behavioral biometric models, and membership and attribute inference attacks against ML models deployed in authentication and a host of other sensitive applications. With any proposed attack comes an obligation to define mitigation strategies. This advancement of knowledge in both attacks and defenses will make the ever-evolving landscape of our digital world more resilient to external threats. However, in the constant arms race between security and privacy threats and their countermeasures, the problem is far from solved, with iterative improvements to be sought in both attacks and defenses. No perfect defense has yet been attained; current defenses remain flawed and carry a tangible cost in either the usability or the utility of the application. While the necessity of these defenses cannot be overstated given the looming threat of attack, we also need to better understand the trade-offs required if they are to be implemented.

Specifically, we describe our successful efforts to rapidly recover a user's secret from observation resilient authentication schemes (ORAS) through behavioral side-channels. We explore the surprising effectiveness of uniform random inputs in breaching the security of behavioral biometric models. We dive deep into membership and attribute inference attacks, highlighting the infeasibility of attribute inference when strong membership inference cannot be performed, and we offer a realigned definition of approximate attribute inference that better reflects the privacy risk posed by an attribute inference attacker. Finally, we evaluate the privacy-utility trade-offs offered by differential privacy as a means of mitigating the aforementioned membership and attribute inference attacks.