Data Mining: Concepts and Techniques (3rd ed.) — Chapter 12 — Outlier Analysis
Jiawei Han, Micheline Kamber, and Jian Pei
University of Illinois at Urbana-Champaign & Simon Fraser University
©2012 Han, Kamber & Pei. All rights reserved.

Chapter 12. Outlier Analysis (Outline)
- Outlier and Outlier Analysis
- Outlier Detection Methods
- Statistical Approaches
- Proximity-Based Approaches
- Clustering-Based Approaches
- Classification Approaches
- Mining Contextual and Collective Outliers
- Outlier Detection in High-Dimensional Data
- Summary

What Are Outliers?
- Outlier: a data object that deviates significantly from the normal objects, as if it were generated by a different mechanism
  - Ex.: an unusual credit card purchase; in sports: Michael Jordan, Wayne Gretzky, ...
- Outliers are different from noise
  - Noise is random error or variance in a measured variable
  - Noise should be removed before outlier detection
- Outliers are interesting: an outlier violates the mechanism that generates the normal data
- Outlier detection vs. novelty detection: at an early stage an object may be flagged as an outlier, but later it may be merged into the model
- Applications: credit card fraud detection, telecom fraud detection, customer segmentation, medical analysis

Types of Outliers (I)
- Three kinds: global, contextual, and collective outliers
- Global outlier (or point anomaly)
  - An object Og is a global outlier if it deviates significantly from the rest of the data set
  - Ex.: intrusion detection in computer networks
  - Issue: finding an appropriate measure of deviation
- Contextual outlier (or conditional outlier)
  - An object Oc is a contextual outlier if it deviates significantly within a selected context
  - Ex.: is 80°F in Urbana an outlier? (it depends on whether it is summer or winter)
  - Attributes of data objects are divided into two groups:
    - Contextual attributes: define the context, e.g., time and location
    - Behavioral attributes: characteristics of the object used in outlier evaluation, e.g., temperature
  - Can be viewed as a generalization of local outliers, whose density significantly deviates from that of their local area
  - Issue: how to define or formulate a meaningful context? (a small sketch follows the next slide)

Types of Outliers (II)
- Collective outliers
  - A subset of data objects that collectively deviate significantly from the whole data set, even if the individual objects are not outliers
  - Application example, intrusion detection: a number of computers keep sending denial-of-service packets to each other
- Detection of collective outliers
  - Consider not only the behavior of individual objects but also that of groups of objects
  - Requires background knowledge about the relationships among data objects, such as a distance or similarity measure
- A data set may contain multiple types of outliers, and one object may belong to more than one type
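The contextual-outlier idea can be made concrete by scoring the behavioral attribute within each context rather than globally. Below is a minimal Python sketch, assuming a toy list of (month, temperature) readings; the readings, the season grouping, the leave-one-out scoring, and the |z| > 3 cutoff are illustrative assumptions, not part of the original slides.

    from statistics import mean, stdev

    # Toy readings: (month, temperature in °F) -- illustrative data only.
    readings = [(1, 28), (1, 31), (2, 30), (2, 80),   # 80°F in winter
                (7, 82), (7, 84), (8, 80), (8, 79)]   # 80°F in summer

    # Contextual attribute: the season, derived from the month.
    # Behavioral attribute: the temperature.
    def season(month):
        return "summer" if month in (6, 7, 8) else "winter"

    by_context = {}
    for month, temp in readings:
        by_context.setdefault(season(month), []).append(temp)

    # Score each reading against the OTHER readings in its context (leave-one-out),
    # so the point being tested does not inflate the estimated spread.
    for month, temp in readings:
        others = list(by_context[season(month)])
        others.remove(temp)                 # drop one occurrence of this value
        mu, sigma = mean(others), stdev(others)
        z = (temp - mu) / sigma if sigma > 0 else 0.0
        if abs(z) > 3:                      # assumed cutoff
            print(f"month={month} temp={temp}: contextual outlier (z={z:.1f})")

With these toy numbers, 80°F is flagged only in the winter context; the same value in summer scores close to its context mean.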
Challenges of Outlier Detection
- Modeling normal objects and outliers properly
  - It is hard to enumerate all possible normal behaviors in an application
  - The border between normal and outlier objects is often a gray area
- Application-specific outlier detection
  - The choice of distance measure and the model of relationships among objects are often application-dependent
  - E.g., in clinical data a small deviation could already be an outlier, while marketing analysis tolerates larger fluctuations
- Handling noise in outlier detection
  - Noise may distort the normal objects and blur the distinction between normal objects and outliers
  - It may help hide outliers and reduce the effectiveness of outlier detection
- Understandability
  - Understand why these are outliers: justification of the detection
  - Specify the degree of an outlier: the unlikelihood of the object being generated by a normal mechanism

Outlier Detection Methods: Two Ways to Categorize
- Based on whether user-labeled examples of outliers can be obtained: supervised, semi-supervised, and unsupervised methods
- Based on assumptions about normal data and outliers: statistical, proximity-based, and clustering-based methods

Outlier Detection I: Supervised Methods
- Model outlier detection as a classification problem: samples examined by domain experts are used for training and testing
- Methods for learning a classifier for outlier detection effectively:
  - Model normal objects and report those not matching the model as outliers, or
  - Model outliers and treat those not matching the model as normal
- Challenges (a small classification sketch follows the next slide)
  - Imbalanced classes, i.e., outliers are rare: boost the outlier class and make up some artificial outliers
  - Catch as many outliers as possible: recall is more important than accuracy (i.e., than not mislabeling normal objects as outliers)

Outlier Detection II: Unsupervised Methods
- Assume the normal objects are somewhat "clustered" into multiple groups, each having some distinct features
- An outlier is expected to be far away from any group of normal objects
- Weakness: cannot detect collective outliers effectively
  - Normal objects may not share any strong patterns, but collective outliers may share high similarity in a small area
  - Ex.: in some intrusion or virus detection settings, normal activities are diverse
  - Unsupervised methods may have a high false-positive rate and still miss many real outliers; supervised methods can be more effective, e.g., at identifying attacks on key resources
- Many clustering methods can be adapted for unsupervised outlier detection
  - Find clusters first, then outliers: objects not belonging to any cluster
  - Problem 1: hard to distinguish noise from outliers
  - Problem 2: costly, since clustering is done first, yet there are far fewer outliers than normal objects
  - Newer methods tackle outliers directly
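To make the supervised setting concrete, here is a minimal sketch of outlier detection as imbalanced classification, assuming scikit-learn is available; the synthetic data, the class weighting, and the choice of a random forest are illustrative assumptions rather than the slides' prescription.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import recall_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic, heavily imbalanced data: 990 normal points, 10 labeled outliers.
    normal = rng.normal(loc=0.0, scale=1.0, size=(990, 2))
    outliers = rng.normal(loc=6.0, scale=1.0, size=(10, 2))
    X = np.vstack([normal, outliers])
    y = np.array([0] * 990 + [1] * 10)          # 1 = outlier

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                               stratify=y, random_state=0)

    # class_weight="balanced" boosts the rare outlier class, reflecting the
    # slide's point that recall on outliers matters more than overall accuracy.
    clf = RandomForestClassifier(n_estimators=100, class_weight="balanced",
                                 random_state=0)
    clf.fit(X_tr, y_tr)

    print("outlier recall:", recall_score(y_te, clf.predict(X_te)))

Reporting recall on the outlier class, rather than overall accuracy, matches the evaluation emphasis stated in the slide.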
Outlier Detection III: Semi-Supervised Methods
- Situation: in many applications the amount of labeled data is small; labels may be available for outliers only, for normal objects only, or for both
- Semi-supervised outlier detection can be regarded as an application of semi-supervised learning
- If some labeled normal objects are available
  - Use the labeled examples and the nearby unlabeled objects to train a model for normal objects
  - Objects not fitting the model of normal objects are detected as outliers
- If only some labeled outliers are available, a small number of labeled outliers may not cover the possible outliers well
  - To improve the quality of outlier detection, one can get help from models of normal objects learned by unsupervised methods

Outlier Detection (1): Statistical Methods
- Statistical methods (also known as model-based methods) assume that the normal data follow some statistical (stochastic) model; data not following the model are outliers
- Effectiveness highly depends on whether the assumed statistical model holds for the real data
- There are rich alternatives among statistical models, e.g., parametric vs. non-parametric
- Example (right figure in the slides): first use a Gaussian distribution to model the normal data
  - For each object y in region R, estimate gD(y), the probability that y fits the Gaussian distribution
  - If gD(y) is very low, y is unlikely to have been generated by the Gaussian model and is thus an outlier
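A minimal sketch of the Gaussian-model example above, assuming SciPy is available and using the average-temperature data from the parametric-methods slide that follows: fit a normal distribution by maximum likelihood and flag points whose estimated density is very low. The 0.05-of-maximum density cutoff is an illustrative assumption.

    import numpy as np
    from scipy.stats import norm

    data = np.array([24.0, 28.9, 28.9, 29.0, 29.1, 29.1, 29.2, 29.2, 29.3, 29.4])

    # Maximum-likelihood fit of a Gaussian (mean and standard deviation).
    mu, sigma = norm.fit(data)

    # gD(y): density of each point under the fitted model.
    density = norm.pdf(data, loc=mu, scale=sigma)

    # Flag points whose density is far below that of the bulk of the data
    # (the 0.05 factor is an assumed cutoff, not from the slides).
    threshold = 0.05 * density.max()
    for y, g in zip(data, density):
        if g < threshold:
            print(f"{y} is unlikely under the Gaussian model (density {g:.2e})")

On this data only the 24.0 reading falls well below the cutoff; its density is roughly two orders of magnitude smaller than that of the remaining points.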
Outlier Detection (2): Proximity-Based Methods
- An object is an outlier if its nearest neighbors are far away, i.e., the proximity of the object deviates significantly from the proximity of most other objects in the same data set
- The effectiveness of proximity-based methods relies heavily on the proximity measure; in some applications, proximity or distance measures cannot be obtained easily
- They often have difficulty finding a group of outliers that stay close to each other
- Two major types of proximity-based outlier detection: distance-based vs. density-based
- Example (right figure in the slides): model the proximity of an object using its 3 nearest neighbors; the objects in region R are substantially different from the other objects in the data set, so the objects in R are outliers

Outlier Detection (3): Clustering-Based Methods
- Normal data belong to large and dense clusters, whereas outliers belong to small or sparse clusters, or do not belong to any cluster
- Since there are many clustering methods, there are many clustering-based outlier detection methods as well
- Clustering is expensive: a straightforward adaptation of a clustering method for outlier detection can be costly and does not scale well to large data sets
- Example (right figure in the slides): two clusters; all points not in R form a large cluster, while the two points in R form a tiny cluster and are thus outliers

Statistical Approaches
- Statistical approaches assume that the objects in a data set are generated by a stochastic process (a generative model)
- Idea: learn a generative model fitting the given data set, then identify the objects in low-probability regions of the model as outliers
- Methods are divided into two categories: parametric vs. non-parametric
- Parametric methods
  - Assume that the normal data are generated by a parametric distribution with parameter θ
  - The probability density function f(x, θ) gives the probability that object x is generated by the distribution; the smaller this value, the more likely x is an outlier
- Non-parametric methods
  - Do not assume an a priori statistical model; instead, determine the model from the input data
  - Not completely parameter-free, but the number and nature of the parameters are flexible and not fixed in advance
  - Examples: histogram and kernel density estimation

Parametric Methods I: Detection of Univariate Outliers Based on the Normal Distribution
- Univariate data: a data set involving only one attribute or variable
- Often assume that the data are generated from a normal distribution, learn the parameters from the input data, and identify the points with low probability as outliers
- Ex.: average temperatures {24.0, 28.9, 28.9, 29.0, 29.1, 29.1, 29.2, 29.2, 29.3, 29.4}
- Use the maximum likelihood method to estimate μ and σ; taking derivatives of the log-likelihood with respect to μ and σ², we obtain the maximum likelihood estimates
  μ̂ = (1/n) Σᵢ xᵢ   and   σ̂² = (1/n) Σᵢ (xᵢ − μ̂)²
- For the above data with n = 10, μ̂ = 28.61 and σ̂ = 1.51
- Then (24 − 28.61)/1.51 = −3.04 < −3, so 24 is an outlier, since the region μ ± 3σ contains 99.7% of the data under a normal distribution

Parametric Methods I: Grubbs' Test
- Univariate outlier detection: Grubbs' test (maximum normed residual test) is another statistical method under the normal distribution assumption
- For each object x in a data set, compute its z-score: z = |x − x̄| / s, where x̄ is the sample mean and s the sample standard deviation
- x is an outlier if
  z ≥ ((N − 1)/√N) · √( t²_{α/(2N), N−2} / (N − 2 + t²_{α/(2N), N−2}) )
  where t²_{α/(2N), N−2} is the value taken by a t-distribution at a significance level of α/(2N), and N is the number of objects in the data set

Parametric Methods II: Detection of Multivariate Outliers
- Multivariate data: a data set involving two or more attributes or variables
- Idea: transform the multivariate outlier detection task into a univariate outlier detection problem
- Method 1: compute the Mahalanobis distance
  - Let ō be the mean vector of the multivariate data set; the Mahalanobis distance of an object o from ō is MDist(o, ō) = (o − ō)ᵀ S⁻¹ (o − ō), where S is the covariance matrix
  - Use Grubbs' test on this measure to detect outliers
- Method 2: use the χ² statistic
  - χ² = Σᵢ (oᵢ − Eᵢ)² / Eᵢ, i = 1..n, where oᵢ is the value of o in the i-th dimension, Eᵢ is the mean of the i-th dimension among all objects, and n is the dimensionality
  - If the χ² statistic is large, object o is an outlier

Parametric Methods III: Using a Mixture of Parametric Distributions
- Assuming the data are generated by a single normal distribution can sometimes be overly simplistic
- Example (right figure in the slides): the objects between the two clusters cannot be captured as outliers, since they are close to the estimated mean
- To overcome this problem, assume the normal data are generated by two normal distributions; for any object o in the data set, the probability that o is generated by the mixture of the two distributions is
  Pr(o | θ1, θ2) = f_θ1(o) + f_θ2(o)
  where f_θ1 and f_θ2 are the probability density functions of θ1 and θ2
- Then use the EM algorithm to learn the parameters μ1, σ1, μ2, σ2 from the data
- An object o is an outlier if it does not belong to any cluster
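The mixture idea above maps directly onto an EM-fitted Gaussian mixture. The sketch below, assuming scikit-learn is available, fits a two-component mixture and treats points with very low likelihood under the mixture as outliers; the synthetic data and the 1st-percentile cutoff are illustrative assumptions.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)

    # Two normal clusters plus a few points sitting between them.
    c1 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
    c2 = rng.normal(loc=[8.0, 8.0], scale=0.5, size=(200, 2))
    between = np.array([[4.0, 4.0], [4.2, 3.8], [3.9, 4.1]])
    X = np.vstack([c1, c2, between])

    # Fit a two-component Gaussian mixture via EM (scikit-learn's implementation).
    gmm = GaussianMixture(n_components=2, covariance_type="full",
                          random_state=0).fit(X)

    # score_samples returns the log-likelihood of each point under the mixture;
    # flag the lowest-likelihood points (the 1st percentile is an assumed cutoff).
    log_lik = gmm.score_samples(X)
    cutoff = np.percentile(log_lik, 1)
    print("flagged points:")
    print(X[log_lik <= cutoff])

The flagged set contains the three points lying between the clusters, which a single fitted Gaussian would place near its mean and therefore miss.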
Non-Parametric Methods: Detection Using Histograms
- The model of normal data is learned from the input data without any a priori structure
- Non-parametric methods often make fewer assumptions about the data and thus are applicable in more scenarios
- Outlier detection using a histogram (the slide figure shows the histogram of purchase amounts in transactions):
  - A transaction of $7,500 is an outlier, since only 0.2% of transactions have an amount higher than $5,000
- Problem: it is hard to choose an appropriate bin size
  - Too small a bin size → normal objects fall into empty or rare bins → false positives
  - Too large a bin size → outliers fall into some frequent bins → false negatives
- Solution: adopt kernel density estimation to estimate the probability density distribution of the data; if the estimated density at an object is high, the object is likely normal, otherwise it is likely an outlier

Proximity-Based Approaches: Distance-Based vs. Density-Based Outlier Detection
- Intuition: objects that are far away from the others are outliers
- Assumption: the proximity of an outlier deviates significantly from that of most of the other objects in the data set
- Two types of proximity-based outlier detection methods:
  - Distance-based outlier detection: an object o is an outlier if its neighborhood does not contain enough other points
  - Density-based outlier detection: an object o is an outlier if its density is much lower than that of its neighbors

Distance-Based Outlier Detection
- For each object o, examine the number of other objects in the r-neighborhood of o, where r is a user-specified distance threshold
- An object o is an outlier if most of the objects in D (taking π as a fraction threshold) are far away from o, i.e., not in the r-neighborhood of o
- Formally, o is a DB(r, π) outlier if
  ‖{o′ | dist(o, o′) ≤ r}‖ / ‖D‖ ≤ π
- Equivalently, one can check the distance between o and its k-th nearest neighbor oₖ, where k = ⌈π · ‖D‖⌉; o is an outlier if dist(o, oₖ) > r
- Efficient computation: nested-loop algorithm (a sketch follows this slide)
  - For each object oᵢ, calculate its distance to the other objects and count the number of other objects in its r-neighborhood
  - If π·n other objects are found within distance r, terminate the inner loop; otherwise, oᵢ is a DB(r, π) outlier
  - Efficiency: the CPU time is not really O(n²) but close to linear in the data set size, since for most non-outlier objects the inner loop terminates early
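Here is a minimal sketch of the nested-loop DB(r, π) algorithm described above, in plain Python; the toy data and the values of r and π are illustrative assumptions.

    import math

    def db_outliers(data, r, pi):
        """Return the DB(r, pi) outliers of a list of numeric tuples (nested loop)."""
        n = len(data)
        need = math.ceil(pi * n)          # neighbors required to be "normal"
        outliers = []
        for i, o in enumerate(data):
            count = 0
            for j, p in enumerate(data):
                if i == j:
                    continue
                if math.dist(o, p) <= r:  # p lies in the r-neighborhood of o
                    count += 1
                    if count >= need:     # early termination: o cannot be an outlier
                        break
            else:
                outliers.append(o)        # inner loop ended without enough neighbors
        return outliers

    # Toy 2-D data: a tight cluster plus one faraway point.
    points = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (0.1, 0.0), (9.0, 9.0)]
    print(db_outliers(points, r=1.0, pi=0.5))   # expected: [(9.0, 9.0)]

The early break in the inner loop is what makes the running time close to linear in practice: most non-outlier objects accumulate enough neighbors after examining only a few other points.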
Distance-Based Outlier Detection: A Grid-Based Method
- Why is efficiency still a concern? When the complete set of objects cannot be held in main memory, I/O swapping is costly
- The major costs: (1) each object is tested against the whole data set (why not only against its close neighbors?); (2) objects are checked one by one (why not group by group?)
- Grid-based method (CELL): the data space is partitioned into a multi-dimensional grid, where each cell is a hypercube with diagonal length r/2
- Pruning using the level-1 and level-2 cell properties:
  - For any possible point x in cell C and any possible point y in a level-1 cell, dist(x, y) ≤ r
  - For any possible point x in cell C and any point y such that dist(x, y) ≥ r, y is in a level-2 cell
- Thus we only need to check the objects that cannot be pruned, and even for such an object o, we only need to compute the distance between o and the objects in the level-2 cells (beyond level 2, the distance from o is more than r)

Density-Based Outlier Detection
- Local outliers: outliers relative to their local neighborhoods, rather than to the global data distribution
- In the slide figure, o1 and o2 are local outliers relative to C1, o3 is a global outlier, but o4 is not an outlier; proximity-based methods, however, cannot find that o1 and o2 are outliers (e.g., when compared with o4)
- Intuition: the density around an outlier object is significantly different from the density around its neighbors
- Method: use the relative density of an object with respect to its neighbors as the indicator of the degree to which the object is an outlier
- k-distance of an object o, dist_k(o): the distance between o and its k-th nearest neighbor
- k-distance neighborhood of o: N_k(o) = {o′ | o′ in D, dist(o, o′) ≤ dist_k(o)}
  - N_k(o) may contain more than k objects, since multiple objects may have identical distance to o

Local Outlier Factor (LOF)
- Reachability distance from o′ to o:
  reachdist_k(o ← o′) = max{dist_k(o), dist(o, o′)}
  where k is a user-specified parameter
- Local reachability density of o:
  lrd_k(o) = ‖N_k(o)‖ / Σ_{o′ ∈ N_k(o)} reachdist_k(o′ ← o)
- LOF (local outlier factor) of an object o is the average ratio of the local reachability densities of o's k-nearest neighbors to the local reachability density of o:
  LOF_k(o) = ( Σ_{o′ ∈ N_k(o)} lrd_k(o′) / lrd_k(o) ) / ‖N_k(o)‖
- The lower the local reachability density of o, and the higher the local reachability densities of o's k-nearest neighbors, the higher the LOF
- This captures a local outlier whose local density is relatively low compared to the local densities of its k-nearest neighbors
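A minimal sketch of LOF in practice, assuming scikit-learn is available; its LocalOutlierFactor estimator follows the local-reachability-density idea summarized above. The toy data, the n_neighbors value, and the contamination level are illustrative assumptions.

    import numpy as np
    from sklearn.neighbors import LocalOutlierFactor

    rng = np.random.default_rng(2)

    # A dense cluster, a sparse cluster, and one point just outside the dense cluster.
    dense = rng.normal(loc=[0.0, 0.0], scale=0.1, size=(50, 2))
    sparse = rng.normal(loc=[5.0, 5.0], scale=1.5, size=(50, 2))
    local_out = np.array([[0.8, 0.8]])      # far only relative to the dense cluster
    X = np.vstack([dense, sparse, local_out])

    # n_neighbors plays the role of k; fit_predict returns -1 for outliers, 1 otherwise.
    lof = LocalOutlierFactor(n_neighbors=10, contamination=0.02)
    labels = lof.fit_predict(X)

    print("flagged points:", X[labels == -1])
    # negative_outlier_factor_ holds -LOF: more negative means more outlying.
    print("LOF of flagged points:", -lof.negative_outlier_factor_[labels == -1])

The point near the dense cluster gets a large LOF even though it is much closer to other objects than many members of the sparse cluster, which is exactly the local-density effect the slide describes.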
Clustering-Based Outlier Detection (1 & 2): Not Belonging to Any Cluster, or Far from the Closest One
- An object is an outlier if (1) it does not belong to any cluster, (2) there is a large distance between the object and its closest cluster, or (3) it belongs to a small or sparse cluster
- Case 1: not belonging to any cluster
  - E.g., identify animals that are not part of a flock, using a density-based clustering method such as DBSCAN
- Case 2: far from its closest cluster
  - Use k-means to partition the data points into clusters
  - For each object o, assign an outlier score based on its distance from its closest cluster center c_o: if dist(o, c_o)/avg_dist(c_o) is large, o is likely an outlier (see the sketch after this slide group)
  - Ex., intrusion detection: consider the similarity between data points and the clusters in a training data set
    - Use a training set to find patterns of "normal" data, e.g., frequent itemsets in each segment, and cluster similar connections into groups
    - Compare new data points with the mined clusters; outliers are possible attacks

Clustering-Based Outlier Detection (3): Detecting Outliers in Small Clusters
- FindCBLOF: detect outliers in small clusters
  - Find clusters and sort them in decreasing size
  - Assign each data point a cluster-based local outlier factor (CBLOF):
    - If object p belongs to a large cluster, CBLOF = cluster size × similarity between p and its cluster
    - If p belongs to a small cluster, CBLOF = cluster size × similarity between p and the closest large cluster
  - Ex.: in the slide figure, o is an outlier since its closest large cluster is C1, but the similarity between o and C1 is small
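A minimal sketch of the Case-2 scoring above (distance to the closest k-means center, normalized by the average distance within that cluster), assuming scikit-learn is available; the synthetic data and the score cutoff of 3 are illustrative assumptions, and this is not the full FindCBLOF algorithm.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)

    # Two well-separated clusters plus one stray point between them.
    X = np.vstack([rng.normal([0, 0], 0.3, size=(100, 2)),
                   rng.normal([6, 6], 0.3, size=(100, 2)),
                   [[3.0, 3.0]]])

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    labels = kmeans.labels_
    centers = kmeans.cluster_centers_

    # Distance of every point to its assigned center.
    dist_to_center = np.linalg.norm(X - centers[labels], axis=1)

    # Average distance to the center within each cluster.
    avg_dist = np.array([dist_to_center[labels == c].mean()
                         for c in range(kmeans.n_clusters)])

    # Outlier score: dist(o, c_o) / avg_dist(c_o); large values suggest outliers.
    scores = dist_to_center / avg_dist[labels]
    print("flagged points:", X[scores > 3])   # cutoff of 3 is an assumption

Normalizing by the per-cluster average distance keeps the score comparable across clusters of different spread, which is the point of the ratio used in the slide.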