By access:
  Subscription full text   592
  Free   23
By subject:
  Management   17
  Collected works   1
  General   4
  Sociology   1
  Statistics   592
By publication year:
  2022: 4    2021: 2    2020: 6    2019: 19   2018: 31   2017: 54   2016: 20   2015: 17   2014: 15
  2013: 193  2012: 62   2011: 13   2010: 17   2009: 17   2008: 22   2007: 11   2006: 15   2005: 15
  2004: 7    2003: 8    2002: 4    2001: 7    2000: 8    1999: 10   1998: 8    1997: 5    1996: 5
  1995: 4    1994: 2    1993: 4    1992: 3    1990: 1    1989: 1    1988: 2    1984: 1    1982: 1    1981: 1
Sorted results: 615 matches in total (search time 640 ms).
41.
In the study of the stochastic behaviour of the lifetime of an element as a function of its length, it is often observed that the failure time (or lifetime) decreases as the length increases. In probabilistic terms, this idea can be expressed as follows. Let T be the lifetime of a specimen of length x; the survival function, which gives the probability that an element of length x survives until time t, is S_T(t, x) = P(T > t/α(x)), where α(x) is a monotonically decreasing function. In particular, it is often assumed that T has a Weibull distribution. In this paper, we propose a generalization of this Weibull model by assuming that the distribution of T is generalized gamma (GG). Since the GG model contains the Weibull, gamma and lognormal models as special and limiting cases, a GG regression model is an appropriate tool for describing the size effect on the lifetime and for selecting among the embedded models. Maximum likelihood estimates are obtained for the GG regression model with α(x) = cx^b. As a special case, this provides an alternative to the usual approach to estimation for the GG distribution, which involves reparametrization. Related parametric inference issues are addressed and illustrated using two experimental data sets. Some discussion of censored data is also provided.
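A minimal sketch of how such a GG regression could be fitted by maximum likelihood in Python, assuming the size effect enters through the scale as α(x) = c·x^b and using SciPy's gengamma parameterization; the function names and simulated data are illustrative, not taken from the paper.

```python
# Hedged sketch: maximum likelihood fitting of a generalized gamma (GG)
# lifetime model whose scale depends on specimen length x through
# alpha(x) = c * x**b.  Names (neg_log_lik, fit_gg_size_effect) and the
# simulated data are illustrative assumptions.
import numpy as np
from scipy import stats, optimize

def neg_log_lik(params, t, x):
    """Negative log-likelihood of T_i ~ GG(shape a, power p, scale c * x_i**b)."""
    log_a, log_p, log_c, b = params            # log-transforms keep a, p, c > 0
    a, p, c = np.exp([log_a, log_p, log_c])
    scale = c * x**b                           # size-dependent scale alpha(x)
    return -np.sum(stats.gengamma.logpdf(t, a, p, scale=scale))

def fit_gg_size_effect(t, x):
    start = np.array([0.0, 0.0, np.log(np.mean(t)), 0.0])
    res = optimize.minimize(neg_log_lik, start, args=(t, x), method="Nelder-Mead")
    log_a, log_p, log_c, b = res.x
    return {"a": np.exp(log_a), "p": np.exp(log_p), "c": np.exp(log_c), "b": b}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(1.0, 10.0, size=300)                     # specimen lengths
    t = stats.gengamma.rvs(2.0, 1.5, scale=3.0 * x**-0.5, random_state=rng)
    print(fit_gg_size_effect(t, x))                          # b < 0: lifetime decreases in x
```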
42.
In this paper, we consider the simple step-stress model for a two-parameter exponential distribution, when both the parameters are unknown and the data are Type-II censored. It is assumed that under two different stress levels, the scale parameter only changes but the location parameter remains unchanged. It is observed that the maximum likelihood estimators do not always exist. We obtain the maximum likelihood estimates of the unknown parameters whenever they exist. We provide the exact conditional distributions of the maximum likelihood estimators of the scale parameters. Since the construction of exact confidence intervals from the conditional distributions is very difficult, we propose to use the observed Fisher information matrix for this purpose. We also suggest using the bootstrap method to construct confidence intervals. Bayes estimates and associated credible intervals are obtained using the importance sampling technique. Extensive simulations are performed to compare the performances of the different confidence and credible intervals in terms of their coverage percentages and average lengths. The performance of the bootstrap confidence intervals is quite satisfactory even for small sample sizes.
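A minimal sketch of the percentile parametric bootstrap for this setting, assuming the cumulative exposure model (CEM) with a common location μ, stress change at time τ, and Type-II censoring at the r-th failure; the closed-form estimates below are the standard ones implied by that likelihood, and the helper names are illustrative, so details may differ from the paper's exact treatment.

```python
# Hedged sketch: percentile parametric bootstrap for the simple step-stress
# model with two-parameter exponential lifetimes under the cumulative
# exposure model.  Function names and the skip-if-undefined rule are
# illustrative choices, not the paper's code.
import numpy as np

def simulate_cem(n, r, tau, mu, th1, th2, rng):
    """First r order statistics of n CEM lifetimes (Type-II censored sample)."""
    u = rng.uniform(size=n)
    p_tau = 1.0 - np.exp(-(tau - mu) / th1)                  # P(failure before tau)
    t = np.where(u <= p_tau,
                 mu - th1 * np.log1p(-u),                    # failure at stress level 1
                 tau + th2 * (-np.log1p(-u) - (tau - mu) / th1))  # failure after tau
    return np.sort(t)[:r]

def step_stress_mle(t, n, tau):
    """MLEs (mu, theta1, theta2); None when they do not exist (all failures on one level)."""
    r, n1 = len(t), int(np.sum(t <= tau))
    if n1 == 0 or n1 == r:
        return None
    mu = t[0]
    th1 = (np.sum(t[:n1] - mu) + (n - n1) * (tau - mu)) / n1
    th2 = (np.sum(t[n1:] - tau) + (n - r) * (t[-1] - tau)) / (r - n1)
    return mu, th1, th2

def percentile_bootstrap_ci(t, n, tau, level=0.95, B=1000, seed=0):
    rng = np.random.default_rng(seed)
    r = len(t)
    est = step_stress_mle(t, n, tau)
    if est is None:
        raise ValueError("MLEs do not exist for this sample")
    boot = []
    while len(boot) < B:
        bs = simulate_cem(n, r, tau, *est, rng)
        bse = step_stress_mle(bs, n, tau)
        if bse is not None:                                  # drop resamples without MLEs
            boot.append(bse)
    q = [(1 - level) / 2, 1 - (1 - level) / 2]
    return est, np.quantile(np.array(boot), q, axis=0)       # rows: lower/upper for (mu, th1, th2)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    data = simulate_cem(n=40, r=35, tau=2.0, mu=0.5, th1=1.5, th2=0.8, rng=rng)
    print(percentile_bootstrap_ci(data, n=40, tau=2.0))
```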
43.
A methodology is developed for estimating consumer acceptance limits on a sensory attribute of a manufactured product. In concept these limits are analogous to engineering tolerances. The method is based on a generalization of Stevens' Power Law. This generalized law is expressed as a nonlinear statistical model. Instead of restricting the analysis to this particular case, a strategy is discussed for evaluating nonlinear models in general, since scientific models are frequently of nonlinear form. The strategy focuses on understanding the geometrical contrasts between linear and nonlinear model estimation and on assessing the bias in estimation and the departures from a Gaussian sampling distribution. Computer simulation is employed to examine the behavior of nonlinear least squares estimation. In addition to the usual Gaussian assumption, a bootstrap sample reuse procedure and a general triangular distribution are introduced for evaluating the effects of a non-Gaussian or asymmetrical error structure. Recommendations are given for further model analysis based on the simulation results. In the case of a model for which estimation bias is not a serious issue, estimating functions of the model are considered. Application of these functions to the generalization of Stevens' Power Law leads to a means for defining and estimating consumer acceptance limits. The statistical form of the law and the model evaluation strategy are applied to consumer research data. Estimation of consumer acceptance limits is illustrated and discussed.
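A minimal sketch of nonlinear least squares for a Stevens-type power law together with a case-resampling bootstrap for examining estimation bias and the shape of the sampling distribution; the parameterization R = k·S^n, the simulated data, and the function names are illustrative assumptions rather than the paper's generalized model.

```python
# Hedged sketch: fitting a Stevens-type power law R = k * S**n by nonlinear
# least squares and using a case-resampling bootstrap to look at estimation
# bias and the sampling distribution of the estimates.
import numpy as np
from scipy.optimize import curve_fit

def power_law(s, k, n):
    return k * s**n

def bootstrap_power_law(s, r, B=2000, seed=0):
    rng = np.random.default_rng(seed)
    theta_hat, _ = curve_fit(power_law, s, r, p0=(1.0, 0.5))
    boot = np.empty((B, 2))
    for b in range(B):
        idx = rng.integers(0, len(s), size=len(s))      # resample (S, R) pairs
        boot[b], _ = curve_fit(power_law, s[idx], r[idx], p0=theta_hat)
    bias = boot.mean(axis=0) - theta_hat                # bootstrap estimate of bias
    return theta_hat, bias, boot                        # boot rows approximate the sampling distribution

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    s = np.linspace(1, 50, 60)                          # stimulus intensities
    r = power_law(s, 2.0, 0.4) + rng.normal(0, 0.2, s.size)   # noisy sensory responses
    theta_hat, bias, boot = bootstrap_power_law(s, r)
    print("estimates:", theta_hat, "bootstrap bias:", bias)
```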
44.
A generalized random coefficient first-order integer-valued autoregressive process with a signed thinning operator is introduced; this kind of process is appropriate for modeling integer-valued time series that can take negative values. Strict stationarity and ergodicity of the process are established. Estimators of the parameters of interest are derived and their properties are studied via simulation. Finally, the bootstrap method is used in the analysis of a real data set.
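A minimal sketch of how a process of this general kind could be simulated, assuming one common form of the signed thinning operator (the sign of x times a Binomial(|x|, |α|) count), a uniformly distributed random coefficient, and Skellam-type integer innovations; these choices are illustrative and not necessarily those of the paper.

```python
# Hedged sketch: simulating a first-order integer-valued AR process built
# from a signed thinning operator with a random coefficient, so the series
# can take negative values.
import numpy as np

def signed_thinning(x, alpha, rng):
    """One assumed form of the signed thinning operator alpha (*) x."""
    count = rng.binomial(abs(x), abs(alpha)) if x != 0 else 0
    return int(np.sign(alpha) * np.sign(x) * count)

def simulate_signed_inar1(n, alpha_low, alpha_high, lam1, lam2, seed=0):
    """X_t = alpha_t (*) X_{t-1} + eps_t, with alpha_t ~ U(alpha_low, alpha_high)
    and eps_t = Poisson(lam1) - Poisson(lam2), an integer-valued innovation."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n, dtype=int)
    for t in range(1, n):
        alpha_t = rng.uniform(alpha_low, alpha_high)        # random coefficient
        eps_t = rng.poisson(lam1) - rng.poisson(lam2)       # Skellam-type innovation
        x[t] = signed_thinning(x[t - 1], alpha_t, rng) + eps_t
    return x

if __name__ == "__main__":
    series = simulate_signed_inar1(500, -0.3, 0.5, lam1=1.0, lam2=1.0)
    print(series[:20], "mean:", series.mean())
```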
45.
Comments     

In this paper we compare Bartlett-corrected, bootstrap, and fast double bootstrap tests on maximum likelihood estimates of cointegration parameters. The key result is that both the bootstrap and the Bartlett-corrected tests must be based on the unrestricted estimates of the cointegrating vectors: procedures based on the restricted estimates have almost no power. The small-sample size bias of the asymptotic test appears so severe as to advise strongly against its use with the sample sizes commonly available; the fast double bootstrap test minimizes size bias, while the Bartlett-corrected test is somewhat more powerful.
46.
We give a critical synopsis of classical and recent tests for Poissonity, our emphasis being on procedures which are consistent against general alternatives. Two classes of weighted Cramér–von Mises type test statistics, based on the empirical probability generating function process, are studied in more detail. Both of them generalize already known test statistics by introducing a weighting parameter, thus providing more flexibility with regard to power against specific alternatives. In both cases, we prove convergence in distribution of the statistics under the null hypothesis in the setting of a triangular array of rowwise independent and identically distributed random variables as well as consistency of the corresponding test against general alternatives. Therefore, a sound theoretical basis is provided for the parametric bootstrap procedure, which is applied to obtain critical values in a large-scale simulation study. Each of the tests considered in this study, when implemented via the parametric bootstrap method, maintains a nominal level of significance very closely, even for small sample sizes. The procedures are applied to four well-known data sets.
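A minimal sketch of one weighted Cramér–von Mises type statistic based on the empirical probability generating function, with a p-value obtained by the parametric bootstrap; the weight u^a and the grid approximation of the integral are illustrative choices, not necessarily the exact statistics studied in the paper.

```python
# Hedged sketch: a weighted Cramer-von Mises type test for Poissonity based
# on the empirical probability generating function (pgf), with parametric
# bootstrap critical values.
import numpy as np

def cvm_pgf_statistic(x, a=1.0, grid=200):
    """n * integral_0^1 (pgf_hat(u) - exp(lambda_hat*(u-1)))**2 * u**a du (midpoint rule)."""
    n = len(x)
    lam_hat = x.mean()
    u = (np.arange(grid) + 0.5) / grid                   # midpoints of (0, 1)
    pgf_hat = np.mean(u[None, :] ** x[:, None], axis=0)  # empirical pgf at each u
    pgf_pois = np.exp(lam_hat * (u - 1.0))               # fitted Poisson pgf
    return n * np.mean((pgf_hat - pgf_pois) ** 2 * u**a)

def poissonity_test(x, a=1.0, B=999, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    t_obs = cvm_pgf_statistic(x, a)
    lam_hat = x.mean()
    # parametric bootstrap: re-simulate from Poisson(lambda_hat) and recompute the statistic
    t_boot = np.array([cvm_pgf_statistic(rng.poisson(lam_hat, size=len(x)), a)
                       for _ in range(B)])
    p_value = (1 + np.sum(t_boot >= t_obs)) / (B + 1)
    return t_obs, p_value, p_value <= alpha

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    x = rng.negative_binomial(5, 0.5, size=100)          # overdispersed alternative
    print(poissonity_test(x))
```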
47.
Methods for assessing the variability of an estimated contour of a density are discussed. A new method called the coverage plot is proposed. Sectioning and bootstrap techniques are compared for a particular problem which arises in Monte Carlo simulation approaches to estimating the spatial distribution of risk in the operation of weapons firing ranges. It is found that, for computational reasons, the sectioning procedure outperforms the bootstrap for this problem. The roles of bias and sample size are also seen in the examples shown.
48.
A method for obtaining bootstrap replicates for one-dimensional point processes is presented. The method involves estimating the conditional intensity of the process and computing residuals. The residuals are bootstrapped using a block bootstrap and used, together with the conditional intensity, to define the bootstrap realizations. The method is applied to the estimation of the cross-intensity function for data arising from a reaction time experiment.
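A minimal sketch of the moving block bootstrap applied to a residual series of the kind described above; how the resampled residuals are recombined with the estimated conditional intensity to form bootstrap realizations of the point process is specific to the paper and is not shown here.

```python
# Hedged sketch: a moving block bootstrap of a residual series, the
# resampling step described in the abstract above.
import numpy as np

def moving_block_bootstrap(resid, block_len, n_boot=500, seed=0):
    """Return an (n_boot, len(resid)) array of block-bootstrap replicates."""
    rng = np.random.default_rng(seed)
    resid = np.asarray(resid)
    n = len(resid)
    n_blocks = int(np.ceil(n / block_len))
    starts_max = n - block_len + 1                       # number of overlapping blocks
    reps = np.empty((n_boot, n))
    for b in range(n_boot):
        starts = rng.integers(0, starts_max, size=n_blocks)
        pieces = [resid[s:s + block_len] for s in starts]
        reps[b] = np.concatenate(pieces)[:n]             # trim to original length
    return reps

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    resid = rng.normal(size=200) + 0.5 * np.sin(np.arange(200) / 10)  # toy dependent residuals
    reps = moving_block_bootstrap(resid, block_len=20)
    print(reps.shape, reps.mean())
```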
49.
Conventional approaches for inference about efficiency in parametric stochastic frontier (PSF) models are based on percentiles of the estimated distribution of the one-sided error term, conditional on the composite error. When used as prediction intervals, coverage is poor when the signal-to-noise ratio is low, but improves slowly as sample size increases. We show that prediction intervals estimated by bagging yield much better coverages than the conventional approach, even with low signal-to-noise ratios. We also present a bootstrap method that gives confidence interval estimates for (conditional) expectations of efficiency, and which have good coverage properties that improve with sample size. In addition, researchers who estimate PSF models typically reject models, samples, or both when residuals have skewness in the “wrong” direction, i.e., in a direction that would seem to indicate absence of inefficiency. We show that correctly specified models can generate samples with “wrongly” skewed residuals, even when the variance of the inefficiency process is nonzero. Both our bagging and bootstrap methods provide useful information about inefficiency and model parameters irrespective of whether residuals have skewness in the desired direction.
50.
A problem arising from the study of the spread of a viral infection among potato plants by aphids appears to involve a mixture of two linear regressions on a single predictor variable. The plant scientists studying the problem were particularly interested in obtaining a 95% confidence upper bound for the infection rate. We discuss briefly the procedure for fitting mixtures of regression models by means of maximum likelihood, effected via the EM algorithm. We give general expressions for the implementation of the M-step and then address the issue of conducting statistical inference in this context. A technique due to T. A. Louis may be used to estimate the covariance matrix of the parameter estimates by calculating the observed Fisher information matrix. We develop general expressions for the entries of this information matrix. Having the complete covariance matrix permits the calculation of confidence and prediction bands for the fitted model. We also investigate the testing of hypotheses concerning the number of components in the mixture via parametric and 'semiparametric' bootstrapping. Finally, we develop a method of producing diagnostic plots of the residuals from a mixture of linear regressions.
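A minimal sketch of the EM algorithm for a two-component mixture of simple linear regressions on a single predictor; the implementation, starting values, and simulated data are generic illustrations rather than the authors' code, and in practice several starting values would be tried.

```python
# Hedged sketch: EM fitting of a two-component mixture of linear regressions
# y = a_k + b_k * x + Gaussian noise, k = 1, 2.
import numpy as np
from scipy.stats import norm

def em_mix_two_regressions(x, y, n_iter=200):
    n = len(y)
    X = np.column_stack([np.ones(n), x])                   # design matrix [1, x]
    # start from a pooled fit and split on the sign of its residuals
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]
    z = (y - X @ beta0 > 0).astype(float)                  # responsibilities for component 1
    beta = np.zeros((2, 2))
    sigma = np.array([y.std(), y.std()])
    pi1 = 0.5
    for _ in range(n_iter):
        # M-step: weighted least squares and weighted error variance per component
        for k, w in enumerate([z, 1.0 - z]):
            W = np.diag(w)
            beta[k] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
            resid = y - X @ beta[k]
            sigma[k] = np.sqrt(np.sum(w * resid**2) / np.sum(w))
        pi1 = z.mean()
        # E-step: posterior probability that each observation comes from component 1
        d1 = pi1 * norm.pdf(y, X @ beta[0], sigma[0])
        d2 = (1.0 - pi1) * norm.pdf(y, X @ beta[1], sigma[1])
        z = d1 / (d1 + d2)
    return {"pi1": pi1, "beta": beta, "sigma": sigma, "responsibilities": z}

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    x = rng.uniform(0, 10, 200)
    comp1 = rng.uniform(size=200) < 0.6                    # true mixing proportion 0.6
    y = np.where(comp1, 1.0 + 2.0 * x, 5.0 + 0.3 * x) + rng.normal(0, 0.5, 200)
    fit = em_mix_two_regressions(x, y)
    print("pi1:", round(fit["pi1"], 3))
    print("beta:", fit["beta"])
```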