41.
This paper proposes an estimator of the unknown size of a target population to which a planted population of known size has been added. The augmented population is observed for a fixed time, and individuals are sighted according to independent Poisson processes. These processes may be time-inhomogeneous, but, within each population, the intensity function is the same for all individuals. When the two populations have the same intensity function, known results on factorial series distributions suggest that the proposed estimator is approximately unbiased and provide a useful estimator of its standard deviation. Except for short sampling times, computational results confirm that the proposed population-size estimator is nearly unbiased and indicate that it performs better overall than existing estimators in the literature. The robustness of this performance is investigated in situations in which it cannot be assumed that the behaviour of the plants matches that of individuals from the target population.
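As a hedged toy illustration of the plant-capture idea (a simple moment-type calculation, not the paper's factorial-series estimator; all parameter values are made up):

```python
# Toy plant-capture illustration: P plants of known number are added to a
# target population of unknown size N.  If every individual, plant or target,
# is sighted independently with the same probability over the survey period,
# the fraction of plants that were seen estimates that probability, and the
# number of distinct target individuals seen can be scaled up accordingly.
import numpy as np

rng = np.random.default_rng(11)
N_true, P = 500, 100
T, intensity = 1.0, 2.0                     # survey length and sighting rate (made up)
p_seen = 1.0 - np.exp(-intensity * T)       # P(at least one sighting) under a Poisson process

seen_target = rng.binomial(N_true, p_seen)  # distinct target individuals sighted
seen_plant = rng.binomial(P, p_seen)        # distinct plants sighted

p_hat = seen_plant / P
N_hat = seen_target / p_hat
print("moment-type estimate of N:", round(N_hat), " true:", N_true)
```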
42.
We consider estimation of the unknown parameters of the Chen distribution [Chen Z. A new two-parameter lifetime distribution with bathtub shape or increasing failure rate function. Statist Probab Lett. 2000;49:155–161] with bathtub-shaped failure rate using progressively censored samples. We obtain maximum likelihood estimates by making use of an expectation–maximization algorithm. Different Bayes estimates are derived under squared error and balanced squared error loss functions. Since the associated posterior distribution appears in an intractable form, we use an approximation method to compute these estimates. A Metropolis–Hastings algorithm is also proposed, and further approximate Bayes estimates are obtained. An asymptotic confidence interval is constructed using the observed Fisher information matrix, and bootstrap intervals are proposed as well. Samples generated from the Metropolis–Hastings algorithm are further used in the construction of HPD intervals. We also obtain prediction intervals and estimates for future observations in one- and two-sample situations. A numerical study is conducted to compare the performance of the proposed methods using simulations, and real data sets are analysed for illustration.
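A minimal sketch of direct maximum-likelihood fitting of the Chen distribution to a complete (uncensored) sample is given below; the paper itself handles progressive censoring via an EM algorithm, and the parameter values used to simulate data here are purely illustrative.

```python
# Direct MLE sketch for the two-parameter Chen distribution,
# F(t) = 1 - exp(lam * (1 - exp(t**beta))), fitted to a complete sample.
import numpy as np
from scipy.optimize import minimize

def chen_negloglik(params, t):
    beta, lam = params
    if beta <= 0 or lam <= 0:
        return np.inf
    tb = t ** beta
    # log f(t) = log(lam*beta) + (beta-1)*log t + t^beta + lam*(1 - exp(t^beta))
    return -np.sum(np.log(lam * beta) + (beta - 1) * np.log(t) + tb + lam * (1 - np.exp(tb)))

rng = np.random.default_rng(0)
beta0, lam0 = 0.7, 0.5                       # illustrative "true" parameters
u = rng.uniform(size=200)
t = (np.log(1.0 - np.log(1.0 - u) / lam0)) ** (1.0 / beta0)   # inverse-CDF sampling

fit = minimize(chen_negloglik, x0=[1.0, 1.0], args=(t,), method="Nelder-Mead")
print("MLEs (beta, lambda):", fit.x)
```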
43.
Ragin’s Qualitative Comparative Analysis (QCA) is often used with small to medium samples where the researcher has good case knowledge. Employing it to analyse large survey datasets, without in-depth case knowledge, raises new challenges, and we present ways of addressing them. We first report a single QCA result from a configurational analysis of the British National Child Development Study dataset (highest educational qualification as a set-theoretic function of social class, sex and ability). We then address the robustness of our analysis by employing Dușa and Thiem’s R QCA package to explore the consequences of (i) changing the fuzzy-set calibration of ability, (ii) simulating errors in measuring ability and (iii) changing the thresholds for assessing the quasi-sufficiency of causal configurations for educational achievement. We also consider how the analysis behaves under simulated re-sampling, using bootstrapping. The paper offers suggested methods to others wishing to use QCA with large-n data.
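For readers unfamiliar with the building blocks mentioned in the abstract, the sketch below (written in Python rather than the R QCA package the authors use, with made-up anchor and membership values) illustrates direct logistic calibration of a fuzzy set and the consistency measure commonly used to assess quasi-sufficiency.

```python
# Two QCA building blocks: (i) "direct" logistic calibration of a raw score
# into a fuzzy-set membership using three anchors, and (ii) the consistency
# of a condition X as quasi-sufficient for an outcome Y, sum(min(X,Y))/sum(X).
import numpy as np

def calibrate(raw, exclusion, crossover, inclusion):
    """Logistic calibration: log-odds of -3 at the exclusion anchor,
    0 at the crossover and +3 at the inclusion anchor."""
    scale = np.where(raw >= crossover,
                     3.0 / (inclusion - crossover),
                     3.0 / (crossover - exclusion))
    return 1.0 / (1.0 + np.exp(-scale * (raw - crossover)))

def sufficiency_consistency(x, y):
    return np.minimum(x, y).sum() / x.sum()

ability = np.array([85, 100, 110, 125, 140])               # hypothetical raw test scores
high_ability = calibrate(ability, exclusion=90, crossover=105, inclusion=120)
high_qualification = np.array([0.1, 0.4, 0.6, 0.9, 1.0])   # hypothetical outcome memberships
print(high_ability)
print("consistency:", sufficiency_consistency(high_ability, high_qualification))
```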
44.
We apply some log-linear modelling methods, which have been proposed for treating non-ignorable non-response, to data on voting intention from the British General Election Survey. We find that, although some non-ignorable non-response models fit the data very well, they may generate implausible point estimates and predictions. Some explanation is provided for the extreme behaviour of the maximum likelihood estimates for the most parsimonious model. We conclude that point estimates for such models must be treated with great caution. To allow for uncertainty about the non-response mechanism, we explore the use of profile likelihood inference and find the likelihood surfaces to be very flat and the interval estimates to be very wide. To reduce the width of these intervals, we propose constraining confidence regions to values where the parameters governing the non-response mechanism are plausible, and we study the effect of such constraints on inference. We find that the widths of these intervals are reduced but remain wide.
45.
We study adaptive maximum-likelihood-type estimation for an ergodic diffusion process whose observations are contaminated by noise. This approach yields asymptotic independence of the estimators for the variance of the observation noise, the diffusion parameter and the drift parameter of the latent diffusion process. Moreover, it can lessen the computational burden compared with simultaneous maximum-likelihood-type estimation. In addition to the adaptive estimation, we propose a test for the presence of observation noise and analyse a real data set in which the observation noise is found to be statistically significant.
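A hedged sketch of the first, "adaptive" step follows: with high-frequency observations Y_i = X_{t_i} + eps_i of a diffusion contaminated by i.i.d. noise, the noise variance can be estimated from the observations alone because squared increments are dominated by the noise; the drift and diffusion parameters would then be estimated in subsequent steps as in the paper. The simulated model and parameter values below are purely illustrative.

```python
# Estimate Var(eps) by (1/(2n)) * sum (Y_{i+1} - Y_i)^2, since
# E[(Y_{i+1}-Y_i)^2] = sigma^2*dt + 2*Var(eps) and dt is small.
import numpy as np

rng = np.random.default_rng(3)
n, dt = 100_000, 1e-4
theta, sigma, tau = 1.0, 0.5, 0.05            # OU drift, diffusion, noise sd (made up)
x = np.zeros(n)
for i in range(1, n):                         # Euler scheme for dX = -theta*X dt + sigma dW
    x[i] = x[i-1] - theta * x[i-1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
y = x + tau * rng.standard_normal(n)          # noisy observations

tau2_hat = np.mean(np.diff(y) ** 2) / 2.0
print("estimated noise variance:", tau2_hat, " true:", tau ** 2)
```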
46.
In this paper we consider and propose confidence intervals for the mean, or difference of means, of skewed populations. We extend the median t interval to the two-sample problem, and we suggest using the bootstrap to obtain the critical points used in the calculation of median t intervals. A simulation study is carried out to compare the performance of the intervals, and a real-life example illustrates the application of the methods.
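The sketch below illustrates the general idea of using bootstrap resampling to obtain the critical points of a studentised statistic for a skewed sample; it is a generic bootstrap-t interval for the mean, not the paper's median t construction, and the simulated data are illustrative only.

```python
# Bootstrap-t interval: resample, studentise each resample around the
# original mean, and use the bootstrap quantiles as critical points.
import numpy as np

def bootstrap_t_interval(x, alpha=0.05, B=2000, seed=None):
    rng = np.random.default_rng(seed)
    n = len(x)
    xbar, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
    t_star = np.empty(B)
    for b in range(B):
        xb = rng.choice(x, size=n, replace=True)
        t_star[b] = (xb.mean() - xbar) / (xb.std(ddof=1) / np.sqrt(n))
    lo, hi = np.quantile(t_star, [alpha / 2, 1 - alpha / 2])
    # Note the reversal of the quantiles in the bootstrap-t construction.
    return xbar - hi * se, xbar - lo * se

x = np.random.default_rng(1).lognormal(mean=0.0, sigma=1.0, size=50)  # skewed sample
print(bootstrap_t_interval(x))
```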
47.
Maximum likelihood estimation with incomplete normal data is considered. Closed-form solutions are described for the general nested case, and exact small-sample moments are given for the two-group case. Some computational comparisons are made with the earlier ESTMAT algorithm.
48.
One important type of question in statistical inference is how to interpret data as evidence. The law of likelihood provides a satisfactory answer when interpreting data as evidence for simple hypotheses, but remains silent for composite hypotheses. This article examines how the law of likelihood can be extended to composite hypotheses within the scope of the likelihood principle. Starting from a system of axioms, we conclude that the strength of evidence for a composite hypothesis should be represented by an interval between the lower and upper profile likelihoods. The article is intended to reveal the connection between profile likelihoods and the law of likelihood under the likelihood principle, rather than to argue in favor of using profile likelihoods to address general questions of statistical inference. The interpretation of the result is also discussed.
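As an illustration of the quantities the abstract refers to (the notation below is our reconstruction, not taken from the article), write $L(\theta)$ for the likelihood and $H: \theta \in \Theta_H$ for a composite hypothesis; the lower and upper profile likelihoods over $H$ are

```latex
\underline{L}(H) \;=\; \inf_{\theta \in \Theta_H} L(\theta),
\qquad
\overline{L}(H) \;=\; \sup_{\theta \in \Theta_H} L(\theta),
```

and the strength of evidence for $H$ is then summarised by the interval $[\underline{L}(H), \overline{L}(H)]$ (suitably standardised against a rival hypothesis) rather than by a single likelihood value.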
49.
Various methods exist for calculating confidence intervals for the benchmark dose in risk analysis. This study compares the performance of three such methods when fitting nonlinear dose-response models: the delta method, the likelihood-ratio method and the bootstrap method. A data set from a developmental toxicity test with continuous, ordinal and quantal dose-response data is used for the comparison. Nonlinear dose-response models with various shapes were fitted to these data. The results indicate that a few thousand runs are generally needed to obtain stable confidence limits when using the bootstrap method. Further, the bootstrap and likelihood-ratio methods were found to give fairly similar results. The delta method, however, in some cases produced different (usually narrower) intervals, and appears unreliable for nonlinear dose-response models. Since the bootstrap method is more time-consuming than the likelihood-ratio method, the latter is more attractive for routine dose-response analysis. In the context of a probabilistic risk assessment, the bootstrap method has the advantage that it links directly to Monte Carlo analysis.
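A hedged sketch of the bootstrap approach is shown below. The paper fits richer nonlinear models to continuous, ordinal and quantal data; here a one-parameter exponential (one-hit) model p(d) = 1 - exp(-lam*d) for quantal data is used purely to illustrate the parametric-bootstrap step, and the doses, counts and benchmark response are made-up values.

```python
# Percentile-bootstrap confidence limits for a benchmark dose (BMD) at 10% extra risk.
import numpy as np
from scipy.optimize import minimize_scalar

dose = np.array([0.0, 1.0, 3.0, 10.0, 30.0])
n    = np.array([50, 50, 50, 50, 50])        # animals per dose group
y    = np.array([0, 3, 8, 20, 40])           # affected animals (hypothetical)
BMR  = 0.10                                  # benchmark response: 10% extra risk

def negloglik(lam, y, n):
    p = np.clip(1.0 - np.exp(-lam * dose), 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (n - y) * np.log(1 - p))

def fit_lambda(y, n):
    return minimize_scalar(negloglik, bounds=(1e-6, 10.0), args=(y, n),
                           method="bounded").x

def bmd(lam):
    return -np.log(1.0 - BMR) / lam          # dose giving BMR extra risk under this model

rng = np.random.default_rng(42)
lam_hat = fit_lambda(y, n)
p_hat = 1.0 - np.exp(-lam_hat * dose)
boot = np.array([bmd(fit_lambda(rng.binomial(n, p_hat), n)) for _ in range(2000)])
print("BMD estimate:", bmd(lam_hat),
      " 90% bootstrap interval:", np.quantile(boot, [0.05, 0.95]))
```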
50.
Overdispersion, or extra variation, is a common phenomenon that occurs when binomial (multinomial) data exhibit larger variances than those permitted by the binomial (multinomial) model. This arises when the data are clustered or when the assumption of independence is violated. Goodness-of-fit (GOF) tests available in the overdispersion literature have focused on testing for the presence of overdispersion in the data, and hence they are not applicable for choosing between several competing overdispersion models. In this paper, we consider a GOF test proposed by Neerchal and Morel [1998. Large cluster results for two parametric multinomial extra variation models. J. Amer. Statist. Assoc. 93(443), 1078–1087] and study its distributional properties and performance characteristics. This statistic is a direct analogue of the usual Pearson chi-squared statistic, but is also applicable when the clusters are not necessarily of the same size. Because the statistic tests model adequacy against the alternative that the model is not adequate, it can be used to compare two competing overdispersion models.
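The following is a hedged illustration of the phenomenon (not the Neerchal–Morel statistic itself): clustered counts are simulated from a beta-binomial model, and the ordinary Pearson chi-squared statistic is seen to greatly exceed its degrees of freedom when an i.i.d. binomial model is assumed. All parameter values are made up.

```python
# Simulate overdispersed (beta-binomial) counts and compute the Pearson
# chi-squared statistic under a plain binomial fit; X^2 >> df signals overdispersion.
import numpy as np

rng = np.random.default_rng(7)
m, n = 200, 20                     # 200 clusters of size 20
p, rho = 0.3, 0.15                 # mean probability and intra-cluster correlation
a = p * (1 - rho) / rho            # beta parameters implied by (p, rho)
b = (1 - p) * (1 - rho) / rho
y = rng.binomial(n, rng.beta(a, b, size=m))   # beta-binomial counts, one per cluster

p_hat = y.sum() / (m * n)
pearson = np.sum((y - n * p_hat) ** 2 / (n * p_hat * (1 - p_hat)))
print(f"Pearson X^2 = {pearson:.1f} on roughly {m - 1} df")
```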