Identifying anomalous observations has important business impacts across all industries, none more so than in fraud detection, where some observations intentionally try to hide, a situation that differs from most rare-event problems in modeling. This talk will highlight some modern approaches to anomaly detection: local outlier factors, isolation forests, and classifier-adjusted density estimation (CADE). All of these techniques have foundations in areas that were not originally anomaly detection. Local outlier factors are derived from k-nearest neighbors; isolation forests are built on tree-based algorithms; and CADE was originally designed as an improvement on, and variation of, kernel density estimation. Nevertheless, all three have been shown to be highly effective at finding anomalous observations in a data set.
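As a concrete illustration of two of the methods mentioned, here is a minimal sketch using scikit-learn's `LocalOutlierFactor` and `IsolationForest` (assuming scikit-learn and NumPy are available); the synthetic data, the planted outliers, and the parameter choices are illustrative assumptions, not material from the talk.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(42)
# Synthetic data: a dense "normal" cluster plus a few planted outliers
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
outliers = rng.uniform(low=6.0, high=8.0, size=(5, 2))
X = np.vstack([normal, outliers])

# Local outlier factor: flags points whose local density is much lower
# than the local density of their k nearest neighbors
lof = LocalOutlierFactor(n_neighbors=20, contamination=0.025)
lof_labels = lof.fit_predict(X)  # -1 = anomaly, 1 = normal

# Isolation forest: anomalies are easier to isolate, so they sit at
# shallower depths in randomly grown trees
iso = IsolationForest(contamination=0.025, random_state=0)
iso_labels = iso.fit_predict(X)  # -1 = anomaly, 1 = normal

print("LOF flagged indices:", np.where(lof_labels == -1)[0])
print("Isolation forest flagged indices:", np.where(iso_labels == -1)[0])
```

The planted outliers occupy rows 200 through 204, so with these settings both detectors should flag points in that range; `contamination` is an assumed prior on the anomaly rate, which in real fraud work is usually unknown and must be tuned.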