Similar articles
20 similar articles found (search time: 437 ms)
1.
This paper studies a sequential procedure R for selecting a random-size subset that contains the multinomial cell with the smallest cell probability. The stopping rule of the proposed procedure R is a composite of the stopping rules of curtailed sampling, inverse sampling, and Ramey-Alam sampling. A result on the worst configuration is shown and is employed in computing the procedure parameters that guarantee certain probability requirements. Tables of these procedure parameters, the corresponding probability of correct selection, the expected sample size, and the expected subset size are given for comparison purposes.

2.
A two-sample partially sequential probability ratio test (PSPRT) is considered for the two-sample location problem with one sample fixed and the other sequential. Observations are assumed to come from two normal populations with equal and known variances. Asymptotically in the fixed sample size, the PSPRT is a truncated Wald one-sample sequential probability ratio test. Brownian motion approximations for boundary-crossing probabilities and the expected sequential sample size are obtained. These calculations are compared to values obtained by Monte Carlo simulation.
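As a point of reference for the one-sample component, here is a minimal sketch of the classical Wald SPRT for a normal mean with known variance (the truncation and the Brownian-motion approximations discussed above are not reproduced; the function name and the default error rates α = β = 0.05 are illustrative assumptions):

```python
import math

def sprt_normal(xs, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Classical Wald SPRT of H0: mu = mu0 vs H1: mu = mu1 for i.i.d.
    normal observations with known sigma.  Returns (decision, n_used)."""
    a = math.log((1 - beta) / alpha)   # upper boundary: accept H1
    b = math.log(beta / (1 - alpha))   # lower boundary: accept H0
    llr = 0.0
    for n, x in enumerate(xs, 1):
        # log-likelihood-ratio increment of one observation
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma ** 2
        if llr >= a:
            return "accept H1", n
        if llr <= b:
            return "accept H0", n
    return "continue", len(xs)
```

Truncating this test at a fixed maximum sample size and replacing the random walk by Brownian motion yields boundary-crossing approximations of the kind compared above against Monte Carlo values.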

3.
In this article, an attempt has been made to settle the question of the existence of an unbiased estimator of the key parameter p of the quasi-binomial distributions of Type I (QBD I) and Type II (QBD II), with or without knowledge of the other parameter φ appearing in the expressions for the probability functions of the QBDs. This is studied with reference to a single observation, a random sample of finite size m, as well as samples drawn by suitably defined sequential sampling rules.

4.
The problem of selecting the best population from among a finite number of populations in the presence of uncertainty arises in many scientific investigations and has been studied extensively. Many selection procedures have been derived for different selection goals. However, most of these procedures, being frequentist in nature, do not indicate how to incorporate the information in a particular sample to give a data-dependent measure of correct selection achieved for that sample. They often assign the same decision and probability of correct selection to two different sample values, one of which may intuitively seem much more conclusive than the other. The methodology of conditional inference offers an approach that achieves both frequentist interpretability and a data-dependent measure of conclusiveness. By partitioning the sample space into a family of subsets, the achieved probability of correct selection is computed by conditioning on which subset the sample falls in. In this paper, the partition considered is the so-called continuum partition, while the selection rules are both fixed-size and random-size subset selection rules. Under the assumption of a monotone likelihood ratio, results on the least favourable configuration and alpha-correct selection are established. These results are not only useful in themselves, but are also used to design a new sequential procedure with elimination for selecting the best of k binomial populations. Comparisons between this new procedure and some other sequential selection procedures with regard to total expected sample size and some risk functions are carried out by simulation.

5.
In this study, we propose a group sequential procedure that allows the necessary sample size to be changed at an intermediate stage of a sequential test. In the procedure, we formulate the conditional power used in the decision rules to judge whether a change of sample size is necessary. Furthermore, we present an integral formula for the power of the test and show how to change the necessary sample size using the power of the test. In simulation studies, we investigate the characteristics of the change of sample size and the pattern of decisions across all stages, based on generated normal random numbers.
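The conditional power referred to above has a standard closed form under the Brownian-motion representation of the test statistic. The sketch below assumes a one-sided test and parameterizes the drift by the expected final z-statistic; this is a generic formulation, not necessarily the authors' exact one:

```python
import math

def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def conditional_power(z1, t1, delta, z_crit):
    """Conditional power of a one-sided final test, given the interim
    z-score z1 at information fraction t1 (0 < t1 < 1).

    delta  : assumed expected value of the FINAL z-statistic,
    z_crit : final-stage critical value.
    Uses the Brownian representation B(t) = z(t) * sqrt(t), where
    B(t) ~ N(delta * t, t) under the assumed effect."""
    b1 = z1 * math.sqrt(t1)              # observed Brownian value
    mean = b1 + delta * (1.0 - t1)       # conditional mean of B(1)
    sd = math.sqrt(1.0 - t1)             # conditional sd of B(1)
    return 1.0 - std_normal_cdf((z_crit - mean) / sd)
```

If the conditional power at the planned final size falls below a chosen threshold, the sample size is increased (raising delta) until the threshold is met.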

6.
Optimized group sequential designs proposed in the literature minimize the average sample size with respect to a prior distribution of the treatment effect, with overall type I and type II error rates controlled at the final stage. The optimized asymmetric group sequential designs that we present here additionally impose constraints on the stopping probabilities at stage one: the probability of stopping for futility at stage one when no drug effect exists, as well as the probability of rejection at stage one when the maximum effect size is true, so that the accountability of the group sequential design is ensured from the first stage onward.

7.
A modification of the sequential probability ratio test is proposed in which Wald's parallel boundaries are broken at some preassigned point of the sample-number axis and Anderson's converging boundaries are used prior to that point. Read's partial sequential probability ratio test can be considered a special case of the proposed procedure. As far as the maximum average sample number reducing property is concerned, the procedure is as good as Anderson's modified sequential probability ratio test.

8.
Sample size calculations in clinical trials need to be based on sound parameter assumptions. Wrong parameter choices may lead to sample sizes that are too small or too large, and can have severe ethical and economic consequences. Adaptive group sequential study designs are one way to deal with planning uncertainties: the sample size can be updated during an ongoing trial based on the observed interim effect. However, the observed interim effect is a random variable and thus does not necessarily correspond to the true effect. One way of dealing with the uncertainty related to this random variable is to include resampling elements in the recalculation strategy. In this paper, we focus on clinical trials with a normally distributed endpoint. We consider resampling of the observed interim test statistic and apply this principle to several established sample size recalculation approaches. The resulting recalculation rules are smoother than the original ones, so the variability in sample size is lower. In particular, we found that some resampling approaches mimic a group sequential design. In general, incorporating resampling of the interim test statistic in existing sample size recalculation rules results in a substantial performance improvement with respect to a recently published conditional performance score.
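A toy illustration of the resampling idea, assuming a one-sided normal test: a base recalculation rule driven by conditional power at the observed interim effect is smoothed by averaging it over replicates of the interim statistic drawn from N(z1, 1). The rule, the target power, and all names are illustrative assumptions, not the paper's procedures or its performance score:

```python
import math
import random

def base_recalc(z1, n1, n_min, n_max, z_alpha=1.96, target=0.9):
    """Toy observed-effect rule: smallest total n in [n_min, n_max] whose
    approximate power at the observed standardized effect reaches target."""
    theta_hat = z1 / math.sqrt(n1)          # observed standardized effect
    for n in range(n_min, n_max + 1):
        expected_z = theta_hat * math.sqrt(n)
        power = 0.5 * (1.0 + math.erf((expected_z - z_alpha) / math.sqrt(2)))
        if power >= target:
            return n
    return n_max

def resampled_recalc(z1, n1, n_min, n_max, reps=2000, rng=None):
    """Smoothed rule: average the base rule over resampled interim
    statistics z* ~ N(z1, 1), in the spirit of resampling-based
    recalculation."""
    rng = rng or random.Random(0)
    total = 0
    for _ in range(reps):
        z_star = rng.gauss(z1, 1.0)
        total += base_recalc(z_star, n1, n_min, n_max)
    return total / reps
```

Averaging over the resampled statistics flattens the jumps of the base rule, which is the smoothing effect described above.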

9.
For given (small) α and β, a sequential confidence set that covers the true parameter point with probability at least 1 - α and one or more specified false parameter points with probability at most β can be generated by a family of sequential tests. Several situations are described in which this approach is natural. The following example is studied in some detail: obtain an upper (1 - α)-confidence interval for a normal mean μ (variance known) with β-protection at μ - δ(μ), where δ(·) is not bounded away from 0, so that a truly sequential procedure is mandatory. Some numerical results are presented for intervals generated by (1) sequential probability ratio tests (SPRTs) and (2) generalized sequential probability ratio tests (GSPRTs). These results indicate the superiority of the GSPRT-generated intervals over the SPRT-generated ones when expected sample size is taken as the performance criterion.

10.
A class of closed inverse sampling procedures R(n,m) for selecting the multinomial cell with the largest probability is considered; here n is the maximum sample size that an experimenter can take and m is the maximum frequency that a multinomial cell can have. The proposed procedures R(n,m) achieve the same probability of a correct selection as the corresponding fixed sample size procedures and the curtailed sequential procedures when m is at least n/2. A monotonicity property of the probability of a correct selection is proved and is used to find the least favorable configurations and to tabulate the necessary probabilities of a correct selection and the corresponding expected sample sizes.
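The rule R(n,m) can be sketched by direct simulation, assuming ties in the final counts are broken at random (the abstract does not state the exact tie-breaking convention):

```python
import random

def r_nm_trial(p, n, m, rng):
    """One run of the closed inverse-sampling rule R(n, m): draw cells one
    at a time, stopping as soon as some cell reaches frequency m or the
    total sample size reaches n; select the cell with the largest count,
    breaking ties at random (assumed convention)."""
    counts = [0] * len(p)
    total = 0
    for total in range(1, n + 1):
        cell = rng.choices(range(len(p)), weights=p)[0]
        counts[cell] += 1
        if counts[cell] >= m:
            break
    best = max(counts)
    winners = [i for i, c in enumerate(counts) if c == best]
    return rng.choice(winners), total

def p_correct_selection(p, n, m, reps=2000, seed=0):
    """Monte Carlo estimate of the probability of correct selection."""
    rng = random.Random(seed)
    target = max(range(len(p)), key=p.__getitem__)
    wins = sum(1 for _ in range(reps) if r_nm_trial(p, n, m, rng)[0] == target)
    return wins / reps
```

Running this over a grid of configurations gives simulated counterparts of the tabulated probabilities of correct selection and expected sample sizes.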

11.
For the model considered by Chaturvedi, Pandey and Gupta (1991), two classes of sequential procedures are developed to construct confidence regions (which may be interval, ellipsoidal or spherical) of pre-assigned width and coverage probability for the parameters of interest, and for the minimum risk point estimation (taking the loss to be quadratic plus a linear cost of sampling) of the nuisance parameter. Second-order approximations are derived for the expected sample size, coverage probability and regret associated with the two classes of sequential procedures. A simple and direct method of obtaining the asymptotic distribution of the stopping time is provided. By means of examples, it is illustrated that several estimation problems can be tackled with the help of the proposed classes of sequential procedures.

12.
We consider empirical Bayes decision theory where the component problems are the optimal fixed sample size decision problem and a sequential decision problem. With these components, an empirical Bayes decision procedure selects both a stopping-rule function and a terminal decision rule function. Empirical Bayes stopping rules are constructed for each case and their asymptotic behaviours are investigated.

13.
The GARCH and stochastic volatility (SV) models are two competing, well-known and often-used models for explaining the volatility of financial series. In this paper, we consider a closed-form estimator for a stochastic volatility model and derive its asymptotic properties. We confirm our theoretical results by a simulation study. In addition, we propose a set of simple, strongly consistent decision rules to compare the ability of the GARCH and SV models to fit the characteristic features observed in high-frequency financial data, such as high kurtosis and a slowly decaying autocorrelation function of the squared observations. These rules are based on a number of moment conditions that is allowed to increase with the sample size. We show that our selection procedure leads to choosing the model that fits best, or the simplest model under equivalence, with probability one as the sample size increases. The finite-sample behavior of our procedure is analyzed via simulations. Finally, we provide an application to stocks in the Dow Jones Industrial Average index.
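Two of the moment features mentioned, kurtosis and the autocorrelation function of the squared observations, can be computed as follows (plain moment-ratio definitions are assumed; this is not the authors' full set of moment conditions):

```python
def kurtosis(x):
    """Moment-ratio kurtosis m4 / m2^2 (tends to 3 for a normal sample)."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    return m4 / (m2 * m2)

def acf_squares(x, max_lag):
    """Sample autocorrelations (lags 1..max_lag) of the squared,
    demeaned series."""
    mean_x = sum(x) / len(x)
    s = [(v - mean_x) ** 2 for v in x]
    n = len(s)
    mean_s = sum(s) / n
    c0 = sum((v - mean_s) ** 2 for v in s) / n
    return [
        sum((s[t] - mean_s) * (s[t + k] - mean_s) for t in range(n - k)) / (n * c0)
        for k in range(1, max_lag + 1)
    ]
```

High sample kurtosis together with a slowly decaying positive `acf_squares` profile is the volatility-clustering signature that both model classes are asked to reproduce.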

14.
We consider methods of computing exactly the probability of “acceptance” and the “average sample size needed” for the sequential probability ratio test (SPRT) and likewise the newer “2-SPRT,” concerning the value of a Bernoulli parameter. The methods permit one to approximate, iteratively, the desired operating characteristics for the test.
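For a truncated Bernoulli SPRT these operating characteristics can indeed be computed exactly by forward recursion over the continuation states, as sketched below (the truncation convention at n_max and the function name are assumptions; the 2-SPRT is not covered):

```python
import math

def sprt_bernoulli_exact(p_true, p0, p1, alpha, beta, n_max):
    """Exact acceptance probabilities and average sample number (ASN) for
    a Bernoulli SPRT of H0: p = p0 vs H1: p = p1, truncated at n_max,
    by forward recursion over the continuation states."""
    a = math.log((1 - beta) / alpha)        # accept-H1 boundary
    b = math.log(beta / (1 - alpha))        # accept-H0 boundary
    w1 = math.log(p1 / p0)                  # llr step for a success
    w0 = math.log((1 - p1) / (1 - p0))      # llr step for a failure
    cont = {0.0: 1.0}                       # llr value -> probability mass
    p_h0 = p_h1 = asn = 0.0
    for n in range(1, n_max + 1):
        nxt = {}
        for llr, mass in cont.items():
            for step, pr in ((w1, p_true), (w0, 1.0 - p_true)):
                z, m = llr + step, mass * pr
                if z >= a:                  # absorbed: accept H1
                    p_h1 += m
                    asn += n * m
                elif z <= b:                # absorbed: accept H0
                    p_h0 += m
                    asn += n * m
                else:                       # still in continuation region
                    key = round(z, 12)
                    nxt[key] = nxt.get(key, 0.0) + m
        cont = nxt
    for llr, mass in cont.items():          # truncation (one convention)
        asn += n_max * mass
        if llr >= 0.0:
            p_h1 += mass
        else:
            p_h0 += mass
    return p_h0, p_h1, asn
```

Since the log-likelihood ratio takes at most n + 1 distinct values after n observations, the state space stays small and the recursion is exact up to floating-point error.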

15.
The problem of selecting the best of k populations is studied for data which are incomplete because some of the values have been deleted randomly. This situation is met in extreme value analysis, where only data exceeding a threshold are observable. For increasing sample size we study the case where the probability that a value is observed tends to zero but the sparseness condition is satisfied, so that the mean number of observable values in each population is bounded away from zero and infinity as the sample size tends to infinity. The incomplete data are described by thinned point processes, which are approximated by Poisson point processes. Under weak assumptions and after suitable transformations, these processes converge to a Poisson point process. Optimal selection rules for the limit model are used to construct asymptotically optimal selection rules for the original sequence of models. The results are applied to extreme value data above high thresholds.

16.
Sequential fixed-width and risk-efficient estimation of the variance of an unspecified distribution is considered. The second-order asymptotic properties of the sequential rules are studied. Extensive simulation studies are carried out in order to study the small sample behavior of the sequential rules for some frequently used distributions.

17.
In this paper, given an arbitrary fixed target sample size, we describe a sequential allocation scheme for comparing two competing treatments in clinical trials. The proposed scheme is a compromise between ethical and optimum allocations. Using some specific probability models, we have shown that, for estimating the risk difference (RD) between two treatment effects, the scheme provides smaller variance than that provided by the corresponding fixed sample size equal allocation sampling scheme.

18.
In clinical trials, a covariate-adjusted response-adaptive (CARA) design allows a subject newly entering a trial a better chance of being allocated to a superior treatment regimen based on cumulative information from previous subjects, and adjusts the allocation according to individual covariate information. Since this design allocates subjects sequentially, it is natural to apply a sequential method for estimating the treatment effect in order to make the data analysis more efficient. In this paper, we study the sequential estimation of treatment effect for a general CARA design. A stopping criterion is proposed such that the estimates satisfy a prescribed precision when the sampling is stopped. The properties of estimates and stopping time are obtained under the proposed stopping rule. In addition, we show that the asymptotic properties of the allocation function, under the proposed stopping rule, are the same as those obtained in the non-sequential (fixed sample size) counterpart. We then illustrate the performance of the proposed procedure with some simulation results using logistic models. The properties, such as the coverage probability of treatment effect, correct allocation proportion and average sample size, for diverse combinations of initial sample sizes and tuning parameters in the utility function are discussed.

19.
Kernel discriminant analysis translates the original classification problem into feature space and solves the problem with dimension and sample size interchanged. In high-dimension low sample size (HDLSS) settings, this reduces the 'dimension' to that of the sample size. For HDLSS two-class problems we modify Mika's kernel Fisher discriminant function which, in general, remains ill-posed even in a kernel setting; see Mika et al. (1999). We propose a kernel naive Bayes discriminant function and its smoothed version, using first- and second-degree polynomial kernels. For fixed sample size and increasing dimension, we present asymptotic expressions for the kernel discriminant functions, discriminant directions and for the error probability of our kernel discriminant functions. The theoretical calculations are complemented by simulations which show the convergence of the estimators to the population quantities as the dimension grows. We illustrate the performance of the new discriminant rules, which are easy to implement, on real HDLSS data. For such data, our results clearly demonstrate the superior performance of the new discriminant rules, and especially their smoothed versions, over Mika's kernel Fisher version, and typically also over the commonly used naive Bayes discriminant rule.

20.
Consider a longitudinal experiment where subjects are allocated to one of two treatment arms and are subjected to repeated measurements over time. Two non-parametric group sequential procedures, based on the Wilcoxon rank sum test and fitted with asymptotically efficient allocation rules, are derived to test the equality of the rates of change over time of the two treatments, when the distribution of responses is unknown. The procedures are designed to allow for early stopping to reject the null hypothesis while allocating fewer subjects to the inferior treatment. Simulations based on the normal, logistic and exponential distributions showed that the proposed allocation rules substantially reduce allocations to the inferior treatment, but at the expense of a relatively small increase in the total sample size and a moderate decrease in power as compared to the pairwise allocation rule.
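The building block of such procedures, the normal-approximation z-score of the Wilcoxon rank-sum statistic, can be sketched as follows (distinct observations are assumed, so no tie correction is applied; the group sequential boundaries and allocation rules above are not reproduced):

```python
def wilcoxon_z(x, y):
    """Normal-approximation z-score of the Wilcoxon rank-sum statistic for
    sample x against sample y (distinct values assumed: no tie handling)."""
    n1, n2 = len(x), len(y)
    ranks = {v: i + 1 for i, v in enumerate(sorted(x + y))}
    w = sum(ranks[v] for v in x)            # rank sum of the first sample
    mean = n1 * (n1 + n2 + 1) / 2.0         # null mean of the rank sum
    var = n1 * n2 * (n1 + n2 + 1) / 12.0    # null variance of the rank sum
    return (w - mean) / var ** 0.5
```

In a group sequential version, this z-score is compared at each interim look against a boundary chosen to preserve the overall type I error rate, which is what permits early stopping.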


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号