Volume 3, Issue 1 (9-2006)
JSRI 2006, 3(1): 1-22
Outlier Detection by Boosting Regression Trees
Nathalie Chèze, Jean-Michel Poggi *1
1- Jean-Michel.Poggi@math.u-psud.fr
Abstract:

A procedure for detecting outliers in regression problems is proposed, based on information provided by boosting regression trees. The key idea is to select the most frequently resampled observation over the boosting iterations and to reiterate after removing it. The selection criterion applies Tchebychev's inequality to the maximum, over the boosting iterations, of the average number of appearances in bootstrap samples; the procedure is therefore free of any assumption on the noise distribution. It selects outliers as observations that are particularly hard to predict. Many well-known benchmark data sets are considered, and a comparative study against two well-known competitors demonstrates the value of the method.
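The mechanism described above can be illustrated with a small sketch: boosting by weighted resampling concentrates bootstrap draws on hard-to-predict observations, so counting appearances and applying a Tchebychev-style cutoff singles out the outlier. This is a hypothetical toy, not the paper's algorithm: the data, the trivial constant learner (a stand-in for CART trees), the weight cap, and the threshold k = 3 are all illustrative assumptions.

```python
import random
import statistics

random.seed(0)

# Toy data (hypothetical, not one of the paper's benchmarks): a constant
# signal with one gross outlier injected at index 7.
n = 20
y = [5.0 + random.gauss(0.0, 0.1) for _ in range(n)]
y[7] = 30.0

B = 200            # boosting iterations
CAP = 0.3          # weight cap (an assumption, not in the paper; it keeps
                   # the toy's constant learner from fitting the outlier)
w = [1.0 / n] * n  # observation weights
counts = [0] * n   # appearances in bootstrap samples

for _ in range(B):
    # Weighted bootstrap sample, as in boosting by resampling.
    idx = random.choices(range(n), weights=w, k=n)
    for i in idx:
        counts[i] += 1
    # Trivial constant learner: predict the median response of the sample.
    pred = statistics.median(y[i] for i in idx)
    # Boosting step: up-weight hard-to-predict observations.
    errs = [abs(yi - pred) for yi in y]
    m = max(errs) or 1.0
    w = [wi * (1.0 + ei / m) for wi, ei in zip(w, errs)]
    # Normalize, then cap the largest weight fraction at CAP.
    s = sum(w)
    w = [wi / s for wi in w]
    top = max(range(n), key=lambda i: w[i])
    if w[top] > CAP:
        rest = 1.0 - w[top]
        w = [CAP if i == top else wi * (1.0 - CAP) / rest
             for i, wi in enumerate(w)]

# Average number of appearances per iteration, and a Tchebychev-style cutoff:
# P(|A - mu| >= k*sigma) <= 1/k^2, so values beyond mu + 3*sigma are suspect.
avg = [c / B for c in counts]
mu, sd = statistics.mean(avg), statistics.pstdev(avg)
suspects = [i for i, a in enumerate(avg) if a > mu + 3 * sd]
print(suspects)
```

In the paper's procedure the flagged observation would then be removed and the whole boosting run repeated, which this one-pass sketch omits.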

Keywords: Boosting, CART, outlier, regression.
Full-Text [PDF 3069 kb]
Type of Study: Research | Subject: General
Received: 2016/02/13 | Accepted: 2016/02/13 | Published: 2016/02/13
Chèze N, Poggi J. Outlier Detection by Boosting Regression Trees. JSRI. 2006; 3(1): 1-22
URL: http://jsri.srtc.ac.ir/article-1-163-en.html

Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Journal of Statistical Research of Iran (JSRI)