Volume 3, Issue 1 (9-2006)
JSRI 2006, 3(1): 1-22
Outlier Detection by Boosting Regression Trees
Nathalie Chèze, Jean-Michel Poggi 1
1- Jean-Michel.Poggi@math.u-psud.fr
Abstract:

A procedure for detecting outliers in regression problems is proposed, based on information provided by boosting regression trees. The key idea is to select the observation most frequently resampled along the boosting iterations and to reiterate after removing it. The selection criterion applies Tchebychev's inequality to the maximum, over the boosting iterations, of the average number of appearances in bootstrap samples; the procedure is therefore free of assumptions on the noise distribution. Outliers are thus selected as observations that are particularly hard to predict. The method is evaluated on many well-known benchmark data sets, and a comparative study against two well-known competitors demonstrates its value.
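A minimal sketch of this idea, assuming a gradient-boosting-style loop with regression stumps and residual-weighted resampling. The toy data, the stump learner, the uniform/residual weight mixture, and the Chebyshev constant `k` are illustrative choices, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data with one planted outlier (hypothetical setup, not the paper's benchmarks).
n = 50
x = np.linspace(0.0, 1.0, n)
y = 2.0 * x + rng.normal(0.0, 0.1, n)
y[10] += 5.0  # planted outlier at index 10

def fit_stump(xs, ys):
    """One-split regression stump fitted by minimizing the sum of squared errors."""
    best_sse, best_split = np.inf, None
    for t in np.quantile(xs, np.linspace(0.1, 0.9, 9)):
        left, right = ys[xs <= t], ys[xs > t]
        if left.size == 0 or right.size == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best_split = sse, (t, left.mean(), right.mean())
    if best_split is None:  # degenerate bootstrap sample: constant fit
        m = ys.mean()
        return lambda q: np.full_like(q, m, dtype=float)
    t, lm, rm = best_split
    return lambda q: np.where(q <= t, lm, rm)

B = 200               # boosting iterations
nu = 0.05             # shrinkage
F = np.zeros(n)       # current boosted prediction
counts = np.zeros(n)  # appearances of each observation across bootstrap samples

for b in range(B):
    r = y - F
    # Resampling weights: half uniform, half proportional to |residual|,
    # so hard-to-predict observations are drawn more often.
    p = 0.5 / n + 0.5 * np.abs(r) / np.abs(r).sum()
    idx = rng.choice(n, size=n, replace=True, p=p)
    counts += np.bincount(idx, minlength=n)
    stump = fit_stump(x[idx], r[idx])  # fit the weak learner to residuals
    F += nu * stump(x)

# Average number of appearances per iteration; the planted outlier, being
# persistently hard to predict, is resampled far more often than the rest.
avg = counts / B
mu, sd = avg.mean(), avg.std()
k = 3.0  # Chebyshev: P(X - mu >= k*sd) <= 1/k^2, distribution free
flagged = np.where(avg > mu + k * sd)[0]
print("most resampled:", int(np.argmax(avg)), "flagged:", flagged.tolist())
```

The paper's full procedure then removes the selected observation and reruns the boosting loop, repeating until no observation exceeds the threshold.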

Keywords: Boosting, CART, outlier, regression.
Full-Text [PDF 3069 kb]
Type of Study: Research | Subject: General
Received: 2016/02/13 | Accepted: 2016/02/13 | Published: 2016/02/13






Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.