Statistical Research and Training Center - Statistical Centre of Iran
Journal of Statistical Research of Iran JSRI
1735-1294
6
1
2009
9
1
Evaluation and Application of the Gaussian-Log Gaussian Spatial Model for Robust Bayesian Prediction of Tehran Air Pollution Data
1
24
EN
R.
Zareifard
zareifard@modares.ac.ir
M.
Jafari Khaledi
jafari-m@modares.ac.ir
10.18869/acadpub.jsri.6.1.1
Air pollution is one of the major problems of the Tehran metropolis. Because Tehran is surrounded by the Alborz Mountains on three sides, pollutants emitted by car traffic and other sources are trapped over the city and cannot disperse without a suitable wind. Carbon monoxide (CO) is one of the most important pollutants in Tehran's air, and its concentration increases markedly in parts of the city with heavy traffic. Because of the negative effects of this gas on respiration and brain activity, modelling and classifying CO levels in order to control and reduce them is of considerable interest. For this reason, Rivaz et al. (2007) used a Gaussian model to present a space-time analysis of Tehran air pollution based on observations from 11 pollution-monitoring stations. Although assuming Gaussian observations simplifies inferences such as prediction, this assumption often fails in practice. One factor that causes departures from normality is the presence of outlying observations. In the Tehran air pollution data, for example, the Sorkhe Hesar station records very low pollution compared to the other stations because it is located in a forested region, so its observation can be considered an outlier. Since the presence of such data thickens the tails of the distribution and increases the kurtosis coefficient, the normal distribution, with its thinner tails, cannot be used in this situation. Identifying and modelling outlying observations has long been a central problem for statisticians, and many solutions have been proposed to overcome the difficulties such observations cause; robust methods are among them (Militino et al., 2006; Cerioli and Riani, 1999).
These methods retain the normality assumption for the observations and aim to provide a robust analysis. An outlying observation may, however, belong to the same pattern as the rest of the data, in which case distributions with thicker tails than the normal can be useful. This was first investigated by Jeffreys (1961). Maronna (1976) and Lange et al. (1989) studied maximum likelihood estimation for models in which the errors follow a Student-t distribution. West (1984) used the family of scale mixtures of normal distributions to model outlying observations, and Fernandez and Steel (2000) examined the existence of the posterior distribution and its moments under improper prior distributions for West's model. For geostatistical data, Palacios and Steel (2006) introduced an extended Gaussian model by taking the error distribution from the family of scale mixtures of normal distributions...
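The scale-mixture construction credited to West (1984) above can be sketched concretely: mixing a normal over a gamma-distributed precision yields a Student-t, whose heavier tails accommodate outliers. The degrees of freedom and sample size below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, nu = 200_000, 3.0   # nu = 3 degrees of freedom, chosen only for illustration

# Scale mixture of normals: if V ~ Gamma(nu/2, rate nu/2) and X | V ~ N(0, 1/V),
# then marginally X ~ Student-t with nu degrees of freedom (heavier tails).
v = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)   # numpy's scale = 1/rate
x = rng.normal(size=n) / np.sqrt(v)
z = rng.normal(size=n)                                   # standard normal benchmark

# The mixture puts far more mass in the tails than the normal does
tail_mix = np.mean(np.abs(x) > 4.0)
tail_norm = np.mean(np.abs(z) > 4.0)
```

The same mechanism underlies the Gaussian-log Gaussian model of the title, with a spatially indexed mixing process in place of the single gamma variable.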
Gaussian-log Gaussian spatial model, robust spatial prediction, Bayesian approach, highest posterior density region, Markov chain Monte Carlo methods, mean square prediction error
http://jsri.srtc.ac.ir/article-1-105-en.html
http://jsri.srtc.ac.ir/article-1-105-en.pdf
A Survey on Simulating Stable Random Variables
25
36
EN
Mehdi
Firouzi
mehdi.firuzi@ymail.com
Adel
Mohammadpour
adel@aut.ac.ir
10.18869/acadpub.jsri.6.1.25
In the general case, Chambers et al. (1976) introduced the following algorithm for simulating any stable random variable $X \sim S(\alpha, \beta, \gamma, \delta)$ with four parameters. They use a nonlinear transformation of two independent uniform random variables to simulate a stable random variable...
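A hedged sketch of the Chambers-Mallows-Stuck transformation follows (1-parametrization, restricted to α ≠ 1; the α = 1 case needs a separate formula). The exponential variate W can itself be obtained as minus the log of a uniform, matching the two-uniform description above.

```python
import numpy as np

def rstable(alpha, beta, gamma=1.0, delta=0.0, size=1, rng=None):
    """Chambers-Mallows-Stuck sampler for stable S(alpha, beta, gamma, delta).

    A sketch of the nonlinear transformation described in the abstract,
    in the 1-parametrization and for alpha != 1 only.
    """
    rng = np.random.default_rng() if rng is None else rng
    if not (0 < alpha <= 2) or alpha == 1.0:
        raise ValueError("this sketch covers 0 < alpha <= 2 with alpha != 1")
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)   # U ~ Uniform(-pi/2, pi/2)
    w = rng.exponential(1.0, size)                 # W ~ Exp(1), e.g. -log(uniform)
    b = np.arctan(beta * np.tan(np.pi * alpha / 2)) / alpha
    s = (1 + beta ** 2 * np.tan(np.pi * alpha / 2) ** 2) ** (1 / (2 * alpha))
    x = (s * np.sin(alpha * (u + b)) / np.cos(u) ** (1 / alpha)
         * (np.cos(u - alpha * (u + b)) / w) ** ((1 - alpha) / alpha))
    return gamma * x + delta

# alpha = 2, beta = 0 recovers the Gaussian: S(2, 0; 1) is N(0, 2)
z = rstable(2.0, 0.0, size=200_000, rng=np.random.default_rng(1))
```

The α = 2 check works because the transformation then collapses to 2 sin(U)·√W, which is exactly N(0, 2).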
Stable distributions, characteristic function, simulating random variable
http://jsri.srtc.ac.ir/article-1-103-en.html
http://jsri.srtc.ac.ir/article-1-103-en.pdf
Discriminant Analysis for ARMA Models Based on Divergency Criterion: A Frequency Domain Approach
37
56
EN
Rahim
Chinipardaz
chinipardaz_r@scu.ac.ir
Behzad
Mansoury
bms598@yahoo.com
Sara
Shafiei Babaei
shafieesara88@yahoo.com
10.18869/acadpub.jsri.6.1.37
The extension of classical discriminant analysis to time series data is a basic problem faced in many fields, such as engineering, economics and medicine. The main objective of discriminant analysis for time series is to examine how far it is possible to distinguish between various groups. Two situations arise in linear time series models: first, when the main discriminatory information is contained in the mean functions, and second, when the mean functions are equal and the discriminatory information lies in the autocovariance functions. The latter case is of more interest, because the first is well documented.
The classical method for discrimination of time series is based on the likelihood ratio approach. Under this approach, allocating the vector x to H1 or H2 leads to the discriminant function $d_Q(x) = x'(R_2^{-1} - R_1^{-1})x$ (Shumway and Stoffer, 2006), where x is the observation vector and R1 and R2 are the covariance matrices under the H1 and H2 models, respectively.
Another approach is based on assessing the distance between models. Two important and common distance measures are the Kullback-Leibler information measure, KL, and the Chernoff information measure, CH. In this case x is allocated to H1 or H2 depending on a disparity measure between the sample spectrum of x and the two models (H1 or H2); see Kakizawa et al. (1998).
In this article KL and CH have been adapted to both autoregressive and moving average models.
Three methods have been compared: the classical method (called the Shumway method) and the KL and CH criteria.
The performance of the methods has been assessed in a numerical study. One hundred time series, each of length two hundred, were simulated from the first model, say H1, and subjected to the discrimination criteria obtained from the Kullback-Leibler, KL, and Chernoff, CH, measures. The number of series misclassified out of one hundred was recorded.
The results showed that all three methods work well for both first-order autoregressive and first-order moving average models. The misclassification rate decreases as the distance between the two populations increases. However, the KL method performs better than both the CH and Shumway methods.
ARMA models, spectral analysis, Kullback-Leibler information measure, Chernoff information measure
http://jsri.srtc.ac.ir/article-1-101-en.html
http://jsri.srtc.ac.ir/article-1-101-en.pdf
Functional-Coefficient Autoregressive Model and its Application for Prediction of the Iranian Heavy Crude Oil Price
57
72
EN
Parvin
Jalili
p.jalili@cbi.ir
Mojtaba
Khazaei
m_khazaei@sbu.ac.ir
10.18869/acadpub.jsri.6.1.57
Time series and their methods of analysis are important subjects in statistics. Many time series behave linearly and can be modelled by linear ARIMA models. However, some observed time series behave nonlinearly, and modelling them requires nonlinear models. For this purpose, parametric nonlinear models such as the bilinear model, the exponential autoregressive model, the threshold autoregressive model and the GARCH model are commonly used.
In the analysis of nonlinear time series, when no prior information is available, identifying a parametric nonlinear model is very difficult because of the wide range of possible nonlinear relationships. A suitable substitute for parametric nonlinear models is nonparametric models and their methods of analysis, and various such models and methods have been introduced. One of the most important is the functional-coefficient autoregressive (FAR) model. The flexibility of FAR models in fitting real observations makes them very useful in applications such as economic surveys, hydrology and related subjects. Many parametric models, such as the autoregressive, threshold autoregressive and exponential autoregressive models, can be obtained as special cases of FAR models.
In this paper, following Chen and Tsay (1993) and Cai et al. (2000), we introduce the FAR model and a method for fitting it. Methods for predicting future values of the series from the fitted model are also presented. In particular, a bootstrap method for m-step-ahead prediction is introduced, from which point predictions, prediction intervals and predictive distributions are obtained.
We use a FAR model to model the Iranian heavy crude oil price from July 1994 to December 2007. Using the average prediction error criterion, we identify a FAR(2,1) model with smoothing parameter h = 1.3, fitted by locally linear regression with the Epanechnikov kernel. Examination of the model's predictive ability and of the performance of the bootstrap prediction shows that the fitted model performs well. Finally, we use the fitted model and the bootstrap method to predict the Iranian heavy crude oil price for the first three months of 2008. Comparison of these predictions with the realized values shows a maximum absolute difference of $4.75, which occurred in March 2008 and corresponds to a relative error of 5% with respect to the real value.
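The locally linear, Epanechnikov-kernel fitting step can be sketched for a stripped-down one-lag functional-coefficient model (delay 2), not the paper's FAR(2,1); the simulated AR(1) data, seed and bandwidth are illustrative assumptions, exploiting the fact noted above that the AR model is a special case of FAR with a constant coefficient function.

```python
import numpy as np

def epanechnikov(z):
    """Epanechnikov kernel K(z) = 0.75 (1 - z^2) on [-1, 1]."""
    return np.where(np.abs(z) <= 1.0, 0.75 * (1.0 - z ** 2), 0.0)

def far_coef(x, u_grid, h):
    """Locally linear estimate of a(.) in the sketch model X_t = a(X_{t-2}) X_{t-1} + eps."""
    y, xlag, u_t = x[2:], x[1:-1], x[:-2]    # response, lag-1 regressor, delay variable
    est = []
    for u in u_grid:
        w = epanechnikov((u_t - u) / h)
        sw = np.sqrt(w)
        # locally linear fit: regress y on [X_{t-1}, (U_t - u) X_{t-1}] with kernel weights
        D = np.column_stack([xlag, (u_t - u) * xlag])
        coef, *_ = np.linalg.lstsq(D * sw[:, None], y * sw, rcond=None)
        est.append(coef[0])                  # local level of the coefficient = a(u)
    return np.array(est)

rng = np.random.default_rng(3)
n = 3000
x = np.zeros(n)
for t in range(1, n):                        # AR(1) with phi = 0.5: constant a(.) = 0.5
    x[t] = 0.5 * x[t - 1] + rng.normal()
a_hat = far_coef(x, u_grid=np.array([-1.0, 0.0, 1.0]), h=0.5)
```

The paper's FAR(2,1) would add a second lagged regressor with its own coefficient function and use h = 1.3 on the price scale; the sketch keeps one lag for brevity.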
Functional-coefficient autoregressive model, locally linear regression method, bootstrap method, kernel function, prediction of the oil price
http://jsri.srtc.ac.ir/article-1-104-en.html
http://jsri.srtc.ac.ir/article-1-104-en.pdf
Some New Methods for Prediction of Time Series by Wavelets
73
92
EN
Somayeh
Mireh
s_mireh@aut.ac.ir
Mina
Aminghafari
aminghafari@aut.ac.ir
10.18869/acadpub.jsri.6.1.73
Extended Abstract. Forecasting is one of the most important purposes of time series analysis. For many years, classical methods were used for this aim, but they do not perform well on real time series because such data sets are nonlinear and non-stationary.
On the one hand, most real-world time series display a time-varying second-order structure.
On the other hand, wavelets are well suited to forecasting non-stationary time series, since they are local, a very important property for analyzing non-stationary series.
To extend the class of stationary processes, the second-order structure of the time series is allowed to change slowly over time or, equivalently, the amplitude in the spectral representation is allowed to depend on time. Dahlhaus (1997) proposes a minimum distance estimation procedure for non-stationary time series models. Ombao et al. (2002) define non-stationary processes by allowing covariance-stationary processes to change over time.
Instead of using the windowed Fourier transform (as in Priestley, 1965), Nason et al. (2000) introduce Locally Stationary Wavelet (LSW) processes, using the (non-decimated) wavelet transform and the rescaled-time principle.
Fryzlewicz et al. (2003) suggest an algorithm to predict LSW processes. This algorithm was used by Van Bellegem and Von Sachs (2002) to model financial log-return series.
Forecasting LSW processes leads to a generalization of the Yule-Walker equations, which can be solved numerically by matrix inversion or by standard iterative algorithms such as the innovations algorithm. In the stationary case, these equations reduce to the ordinary Yule-Walker equations.
All the articles above consider pointwise prediction of discrete time series, but functional prediction (prediction of an interval) can be considered instead, since for continuous time series interval prediction is more suitable than pointwise prediction. Antoniadis et al. (2006) propose a functional wavelet-kernel smoothing method. This method uses the interpolating wavelet transform, which is not the most popular wavelet transform.
The predictor may be seen as a weighted average of the past paths, placing more weight on those paths that are more similar to the present one. Hence, the ‘similar blocks’ must be found.
To summarize, this method is performed in the following two steps:
1. find, among the past paths, those that are ‘similar’ or ‘close’ to the last observed path; this determines the weights;
2. predict the time series by a locally weighted average using the obtained weights.
In this article, after describing the methods mentioned above, we suggest some extensions of the functional wavelet-kernel method for forecasting time series by means of wavelets, and we compare them with several prediction methods. We propose to use two different wavelet transforms instead of the interpolating wavelet transform: the discrete wavelet transform (DWT) and the non-decimated wavelet transform (NDWT). The first is an orthogonal transform, while the second is redundant. These transforms are more widely used than the interpolating wavelet transform and are readily available in most mathematical software, such as S-Plus and MATLAB.
We consider the following methods: those proposed by Fryzlewicz et al. (2003) and Antoniadis et al. (2006), the classical autoregressive model, and our two proposed methods. We then compare these methods on simulated and real data. We simulate data from an AR(7) model (stationary data) and from an AR(7) model contaminated by a sinusoid (non-stationary data). We also consider two real data sets: Paris electricity consumption and the El-Nino data. In our comparison, our methods give better results than the other methods considered.
We also show that the mean square prediction error converges to zero under some conditions as the sample size grows.
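As a minimal, self-contained illustration of the DWT option discussed above (a sketch, not the paper's implementation), a one-level Haar transform and its inverse can be written as follows; perfect reconstruction is what makes such a transform usable inside a forecasting pipeline.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthogonal Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # smooth / approximation part
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail part
    return a, d

def haar_idwt(a, d):
    """Inverse of the one-level Haar DWT (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt(x)
x_rec = haar_idwt(a, d)   # matches x up to floating-point error
```

The NDWT mentioned above applies the same filter pair at every shift instead of downsampling by two, which makes the transform redundant but translation-invariant.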
References
Antoniadis, A., Paparoditis, E. and Sapatinas, T. (2006). A functional wavelet-kernel approach for continuous-time prediction. J. R. Statist. Soc. B, 68, 837-857.
Dahlhaus, R. (1997). Fitting time series models to nonstationary processes. Ann. Statist., 25, 1-37.
Fryzlewicz, P., Van Bellegem, S. and von Sachs, R. (2003). Forecasting non-stationary time series by wavelet process modelling. Annals of the Institute of Statistical Mathematics, 55, 737-764.
Nason, G.P., von Sachs, R. and Kroisandt, G. (2000). Wavelet processes and adaptive estimation of evolutionary wavelet spectra. J. Roy. Statist. Soc. Ser. B, 62, 271-292.
Ombao, H., Raz, J., von Sachs, R. and Guo, W. (2002). The SLEX model of a nonstationary random process. Ann. Inst. Statist. Math., 54, 171-200.
Priestley, M. (1965). Evolutionary spectra and nonstationary processes. J. Roy. Statist. Soc. Ser. B, 27, 204-237.
Van Bellegem, S. and von Sachs, R. (2002). Forecasting Economic Time Series using Models of Nonstationary Variance (Discussion paper No. 0227). Institut de statistique, UCL.
Forecasting, non-stationary time series, wavelet, interpolating wavelet, functional wavelet-kernel smoothing
http://jsri.srtc.ac.ir/article-1-107-en.html
http://jsri.srtc.ac.ir/article-1-107-en.pdf
Correlation Pattern between Temperature, Humidity and Precipitation by Using Functional Canonical Correlation
93
120
EN
A.
Mottaghi Golshan
E.
Hosseini-nasab
m_hosseininasab@sbu.ac.ir
R.
Farid Rohani
10.18869/acadpub.jsri.6.1.93
Understanding the dependence structure and the relationship between two sets of variables is of central interest in statistics. When encountering two large sets of variables, a researcher can express the relationship between them by extracting only a finite number of linear combinations of the original variables that produce the largest correlations with the second set.
When the data are continuous functions of another variable (generally time), the methods of multivariate analysis cannot be applied directly. Some theoretical justification is therefore needed to provide the required definitions and concepts regarding the essential nature of the data. This leads to defining canonical correlation for pairs of random functions, called functional canonical correlation analysis (FCCA).
If data related to functional phenomena are observed discretely, the first task is to convert these observations into appropriate curves, because of the functional nature of the underlying phenomena. Moreover, the functional quantity of interest may be measured with error; in such cases, we should first remove the observational error by applying a smoothing procedure. In this paper, Iranian weather data collected in 2006 are treated using FCCA. The data set contains discrete measurements of three phenomena, temperature, humidity and precipitation, collected from 102 weather stations. We fitted continuous curves to the original data and then extracted the correlation patterns between each pair of the three phenomena.
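Once each smoothed curve is reduced to a small vector of basis coefficients, the finite-dimensional core of the canonical-correlation computation applies; a minimal sketch follows (with synthetic data, not the weather curves; full FCCA additionally requires smoothing or regularization of the covariance operators).

```python
import numpy as np

def cca_first(X, Y):
    """First canonical correlation between column-centred data matrices X and Y,
    via orthonormal bases of the column spaces and an SVD (Bjorck-Golub method)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return s[0]                     # largest singular value = first canonical correlation

rng = np.random.default_rng(4)
n = 5000
z = rng.normal(size=n)              # shared component linking the two sets
X = np.column_stack([z + rng.normal(size=n), rng.normal(size=n)])
Y = np.column_stack([z + rng.normal(size=n), rng.normal(size=n)])
rho = cca_first(X, Y)               # population value here is 0.5
```

For the weather application, X and Y would hold the basis coefficients of, say, the temperature and humidity curves at the 102 stations.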
Functional data analysis, functional canonical correlation, covariance operator, Hilbert-Schmidt, smoothing
http://jsri.srtc.ac.ir/article-1-100-en.html
http://jsri.srtc.ac.ir/article-1-100-en.pdf
Empirical Likelihood Approach and its Application on Survival Analysis
121
139
EN
Mohadese
Safakish
m.safakish_atu@yahoo.com
Reza
Navvabpour
h.navvabpour@srtc.ac.ir
10.18869/acadpub.jsri.6.1.121
A number of nonparametric methods exist for studying a population and its parameters when the distribution is unknown. Some of them, such as the resampling bootstrap method, are based on resampling from an initial sample.
In this article the empirical likelihood approach is introduced as a nonparametric method that makes more efficient use of auxiliary information to construct confidence regions.
In the empirical likelihood approach, the method of Lagrange multipliers is applied to estimate...
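The Lagrange-multiplier computation referred to above can be sketched for the simplest case, the empirical likelihood ratio for a mean (Owen's formulation; the Newton iteration and the exponential test data below are illustrative assumptions).

```python
import numpy as np

def neg2_log_elr(x, mu, tol=1e-10, max_iter=100):
    """-2 log empirical likelihood ratio for the mean mu.

    Solves sum_i (x_i - mu) / (1 + lam * (x_i - mu)) = 0 for the Lagrange
    multiplier lam by Newton's method; the implied weights are
    p_i = 1 / (n * (1 + lam * (x_i - mu))).
    """
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                      # mu outside the convex hull of the data
    lam = 0.0
    for _ in range(max_iter):
        denom = 1.0 + lam * z
        g = np.sum(z / denom)              # score in lam
        gp = -np.sum(z ** 2 / denom ** 2)  # its derivative (always negative)
        step = g / gp
        lam -= step
        if abs(step) < tol:
            break
    return 2.0 * np.sum(np.log(1.0 + lam * z))   # equals -2 log R(mu)

rng = np.random.default_rng(5)
x = rng.exponential(size=50)
stat = neg2_log_elr(x, x.mean())           # zero at the sample mean
```

By Owen's theorem, -2 log R(mu) is asymptotically chi-squared with one degree of freedom at the true mean, which is what makes confidence regions from this statistic possible.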
Empirical likelihood, estimating equation, Kaplan-Meier estimator, median regression, right censored, profile empirical likelihood, empirical likelihood bootstrap method
http://jsri.srtc.ac.ir/article-1-106-en.html
http://jsri.srtc.ac.ir/article-1-106-en.pdf