ISSN 1735-1294
Statistical Research and Training Center - Statistical Centre of Iran

Article 143 (General)
Determination of the Size of a Trial, Using Lindley's Method
H. Pezeshk
Vol. 1, No. 2 (2005), pp. 101-108; online: 09/02/2016
Extended Abstract. When a new treatment is being considered, trials are carried out to estimate the increase in performance that is likely to result if the new treatment were to replace the one in current use. Many authors have looked at this problem, and many procedures have been introduced to solve it. An important feature of the analysis in this work is that it takes account of the fact that the number of subsequent users of the new treatment will be high only if the new treatment turns out, in a statistical sense, to be clearly better than the previous one.
Traditional classical methods of selecting sample sizes are based on the required size and the required power of the test for a specified treatment effect. The most frequently used sample size formula arises from the relationship between the standard error of the estimator of the parameter of interest and the sample size.
Since the formula can be highly sensitive to the choice of inputs, careful selection of the parameter estimates and target criteria is an essential step in determining the sample size. Classical or frequentist methods are unable to take into account uncertainty in point specifications. Bayesian methods are ideally suited for design, since they provide a tool for specifying uncertainty and how it changes in response to further information.
In this work we determine the optimal size of...
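The standard-error-based formula mentioned above can be made concrete. Below is a minimal sketch of the classical per-group sample size for a two-sided z-test with known standard deviation; this is the frequentist baseline the abstract contrasts with, not Lindley's Bayesian procedure, and the function name and default levels are illustrative:

```python
import math
from statistics import NormalDist

def sample_size(delta, sigma, alpha=0.05, power=0.8):
    """Per-group sample size for a two-sided z-test to detect a mean
    difference `delta` when the standard deviation `sigma` is known:
    n = ((z_{alpha/2} + z_{power}) * sigma / delta)^2, rounded up."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(((z_a + z_b) * sigma / delta) ** 2)

print(sample_size(delta=0.5, sigma=1.0))  # 32 subjects per group
```

Note how sensitive the answer is to the inputs: halving `delta` quadruples the required sample size, which is the sensitivity the abstract points to.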
Article 145 (General)
Row and Column Elimination Sampling Design +1 and its Efficiencies
M. Salehi M. and A. Jamalzadeh
Vol. 1, No. 2 (2005), pp. 109-122; online: 09/02/2016
Extended Abstract. It is traditional in biological, sociological, agricultural and geological studies to partition a geographical area into quadrats and then take a sample of them under a particular sampling design. The relevant characteristic of the sampled quadrats is measured to estimate a population parameter. We suppose that the variable of interest has a positive spatial autocorrelation. Sampling designs that produce an appropriate coverage of the population increase the precision of the parameter estimator (Schreuder et al., 1993).
Hájek (1959) showed that, under a model with positive spatial autocorrelation, systematic sampling is an optimal design for a one-dimensional population. Systematic sampling and stratified sampling with only one unit per stratum are two traditional designs that cover the population region well (McKenzie et al., 1991). Unfortunately, neither design admits an unbiased variance estimator. Simple Latin Square Sampling (SLSS) is another design that provides good coverage of the population, but it too lacks a variance estimator, which is a weakness in practice. Munholland and Borkowski (1995) introduced Simple Latin Square Sampling +1 (SLSS+1); taking one additional sampling unit makes an unbiased variance estimator available. Two problems remain with both SLSS and SLSS+1: the population must be square, and the sample size is restricted to the square root of the population size.
Salehi (2002) introduced Systematic Simple Latin Square Sampling (SSLSS) for...
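As a rough illustration of the coverage idea behind SLSS (a sketch of plain SLSS only, not the SLSS+1 or SSLSS procedures discussed above; the grid and function name are hypothetical): one unit is selected from each row and each column of a k x k grid via a random permutation of columns, which spreads the sample over the whole region.

```python
import random

def simple_latin_square_sample(grid):
    """Draw a Simple Latin Square Sample from a k x k grid of quadrat
    values: exactly one unit per row and per column, chosen by pairing
    each row with a randomly permuted column index."""
    k = len(grid)
    cols = random.sample(range(k), k)  # random permutation of columns
    return [grid[r][cols[r]] for r in range(k)]

random.seed(1)
# Toy 4 x 4 population: cell (r, c) holds the value 4*r + c
grid = [[4 * r + c for c in range(4)] for r in range(4)]
sample = simple_latin_square_sample(grid)
print(sample)  # 4 values, one from each row and each column
```

The sample size is forced to equal k, the square root of the population size, which is exactly the restriction noted in the abstract.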
Article 142 (General)
Estimation of Scale Parameter in a Subfamily of Exponential Family with Weighted Balanced Loss Function
A. Parsian and M. Jafari Jozani
Vol. 1, No. 2 (2005), pp. 123-142; online: 09/02/2016
Suppose x_1, x_2, ..., x_n is a random sample of size n from a distribution with pdf...
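For reference, a weighted balanced loss of Zellner's type, on which this line of work builds, can be written as below; the exact weight function q(θ) and target estimator δ₀ used in this paper may differ, so this form is an assumption:

```latex
L_{\omega,\delta_0}(\theta,\delta)
  \;=\; \omega\, q(\theta)\,(\delta-\delta_0)^2
  \;+\; (1-\omega)\, q(\theta)\,(\delta-\theta)^2,
  \qquad 0 \le \omega < 1,
```

which trades off closeness to a target estimator δ₀ (goodness of fit) against closeness to the parameter θ (precision of estimation), with the weight ω controlling the balance.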
Article 140 (General)
On Moments of the Concomitants of Classic Record Values and Nonparametric Upper Bounds for the Mean under the Farlie-Gumbel-Morgenstern Model
M. Amini and J. Ahmadi
Vol. 1, No. 2 (2005), pp. 143-160; online: 09/02/2016
In a sequence of random variables, record values are observations that exceed (upper records) or fall below (lower records) all preceding observations. Now consider a sequence of pairwise random variables {(X_i, Y_i), i >= 1}. When the experimenter is interested only in the sequence of records of the first component, the second component associated with a record value of the first is termed the concomitant of that record value. The aim of this paper is to investigate the properties of concomitants of record values under the Farlie-Gumbel-Morgenstern (FGM) model. The sequence of upper record values and their associated concomitants can...
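The FGM structure is straightforward to simulate, which makes the record/concomitant setup concrete. The sketch below is an assumption-laden illustration, not the paper's method: it samples pairs from the FGM copula C(u,v) = uv[1 + α(1-u)(1-v)] by inverting the conditional CDF of V given U, then extracts the upper records of the first component together with their concomitants (all function names are illustrative).

```python
import math
import random

def fgm_pair(alpha, rng):
    """Draw (U, V) from the FGM copula by conditional inversion.
    Given U = u, the conditional CDF is w = v * [1 + a*(1 - v)]
    with a = alpha*(1 - 2u); solving the quadratic for v gives
    v = 2w / (b + sqrt(b^2 - 4aw)), b = 1 + a."""
    u, w = rng.random(), rng.random()
    a = alpha * (1 - 2 * u)
    b = 1 + a
    v = w if abs(a) < 1e-12 else 2 * w / (b + math.sqrt(b * b - 4 * a * w))
    return u, v

def record_concomitants(pairs):
    """Upper records of the first component with their concomitants."""
    records, current_max = [], -math.inf
    for x, y in pairs:
        if x > current_max:
            current_max = x
            records.append((x, y))
    return records

rng = random.Random(7)
pairs = [fgm_pair(0.5, rng) for _ in range(1000)]
recs = record_concomitants(pairs)
print(len(recs), recs[0])
```

The first observation is always a record by convention, and each record's second coordinate is the concomitant whose moments the paper studies.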
Article 141 (General)
Truncated Linear Minimax Estimator of a Power of the Scale Parameter in a Lower-Bounded Parameter Space
N. Nematollahi and M. Jafari Jozani
Vol. 1, No. 2 (2005), pp. 161-178; online: 09/02/2016
Minimax estimation problems with a restricted parameter space have received increasing interest over the last two decades. Several authors have derived minimax and admissible estimators of bounded parameters under squared error loss and scale-invariant squared error loss. In some truncated estimation problems the most natural estimator to consider is the truncated version of a classical estimator in the original problem; the MLE in the truncated normal problem is one such example. In exponential families, a class of reasonable estimators of the mean in the unrestricted problem are the linear estimators, which arise as (proper or generalized) Bayes estimators for conjugate families. Hence it is natural to consider truncated versions of such linear estimators.
A theme which runs through much of the literature on such truncated procedures is that while they improve on the untruncated estimator, they are themselves inadmissible because they are not generalized Bayes.
In this paper we consider a subclass of the exponential families of distributions which includes the Exponential, Weibull, Gamma, Normal, Inverse Gaussian and some other distributions. The minimax and linear admissible estimators of the r-th power of the scale parameter under scale-invariant squared-error loss are obtained. The class of truncated linear estimators of the r-th power of the lower-bounded scale parameter in this family is also considered. It is shown that each member of this class is inadmissible and that exactly one of them is minimax under scale-invariant squared-error loss. Further, this minimax estimator is compared with the admissible minimax estimator of the lower-bounded scale parameter obtained by Jafari Jozani et al. (2002). Dealing with the family of transformed chi-square distributions introduced by Rahman and Gupta (1993), we apply our results to their lower-bounded parameters, which are not necessarily scale parameters. We show that the truncated linear minimax estimator obtained by van Eeden (1995) for the gamma distribution is a special case of our estimator.
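A small simulation illustrates why truncating a linear estimator at the lower bound improves on the untruncated version. The sketch below uses a hypothetical exponential-scale setting with an assumed lower bound m, not the paper's general subfamily or its exact minimax constant, and compares Monte Carlo risks under scale-invariant squared-error loss:

```python
import random

def mc_risk(estimator, theta, n, reps=20000, seed=0):
    """Monte Carlo risk under scale-invariant squared error
    L(theta, d) = (d/theta - 1)^2, for an i.i.d. sample of size n
    from Exponential with scale theta (rate 1/theta)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        xbar = sum(rng.expovariate(1 / theta) for _ in range(n)) / n
        total += (estimator(xbar) / theta - 1) ** 2
    return total / reps

m, n = 1.0, 5                    # assumed lower bound theta >= m
c = n / (n + 1)                  # best linear coefficient under this loss
linear = lambda xbar: c * xbar              # unrestricted linear estimator
truncated = lambda xbar: max(c * xbar, m)   # truncated at the bound

theta = 1.05                     # true scale near the boundary
print(mc_risk(linear, theta, n), mc_risk(truncated, theta, n))
```

Near the boundary the truncated estimator dominates, since every draw with c*xbar below m is pulled closer to the true scale; the same fixed seed is used for both calls so the comparison is paired.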
Article 144 (General)
Cumulative Risk Estimation for Chemical Mixtures
M. Razzaghi
Vol. 1, No. 2 (2005), pp. 196-208; online: 09/02/2016
In reality, humans are almost always exposed to a combination of toxic substances and seldom to a single agent. Simultaneous exposure to a multitude of chemicals can have unexpected consequences: the combined risk may be greater or smaller than a simple summation of the effects induced by the chemicals given individually. Here, a method is proposed for estimating the cumulative risk, that is, the risk associated with exposure to more than one chemical through different routes. The method is based on using the data to determine a suitable power transformation of the dose of each component of the mixture and fitting a dose-response model to the mixture under dose addition. Necessary and sufficient conditions for the constancy of the relative potency between two chemicals, in terms of the slopes of their corresponding dose-response models, are derived, and it is shown how the relative potency may be estimated directly from the joint dose-response model of the mixture. An example using a mixture of four chemicals is used for illustration.
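The dose-addition assumption can be sketched numerically. In the toy example below every quantity (the logistic curve, the relative potencies, and the doses) is hypothetical and is not taken from the paper; it only shows the mechanics of dose addition: each component dose is converted to an equivalent dose of an index chemical via its relative potency, the equivalent doses are summed, and the sum is passed through a single common dose-response curve.

```python
import math

def mixture_response(doses, potencies, f):
    """Response to a mixture under dose addition: convert each component
    dose to an equivalent dose of the index chemical (relative potency
    rho_i), sum the equivalent doses, and apply the common curve f."""
    equivalent_dose = sum(rho * d for rho, d in zip(potencies, doses))
    return f(equivalent_dose)

# Hypothetical logistic dose-response curve for the index chemical
f = lambda d: 1 / (1 + math.exp(-(math.log(d + 1e-12) - 1.0)))

potencies = [1.0, 0.5, 2.0, 0.1]   # assumed relative potencies (index = 1)
doses = [0.5, 1.0, 0.2, 3.0]       # assumed doses of the four components
print(mixture_response(doses, potencies, f))
```

Constancy of the relative potencies is what makes this reduction to a single equivalent dose valid; the paper's contribution is characterizing when that constancy holds and estimating the potencies from the joint model.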