Search results: 2,434 records in total (2,363 subscription full text, 63 free, 8 domestic free access), drawn from Management (199), Population Studies (22), Collected Works (14), Theory and Methodology (40), General (157), Sociology (158), and Statistics (1,844), with publication years spanning 1975-2024 (peak: 584 records in 2013). Search time: 15 ms.
41.
In this paper, we study the identification of Bayesian regression models when an ordinal covariate is subject to unidirectional misclassification. Xia and Gustafson [Bayesian regression models adjusting for unidirectional covariate misclassification. Can J Stat. 2016;44(2):198-218] obtained model identifiability for non-binary regression models when a binary covariate is subject to unidirectional misclassification. In the current paper, we establish the moment identifiability of regression models for misclassified ordinal covariates with more than two categories, based on the forms of the observable moments. Computational studies confirm the theoretical results. We apply the method to two datasets, one from the Medical Expenditure Panel Survey (MEPS) and the other from Translational Research Investigating Underlying Disparities in Acute Myocardial Infarction Patients' Health Status (TRIUMPH).
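To illustrate the problem this abstract addresses, the sketch below simulates a three-category ordinal covariate that can only be misreported into a higher category (unidirectional misclassification) and compares the regression slope recovered from the true covariate with the slope from the misclassified proxy. All misclassification rates and regression coefficients are hypothetical; the snippet only shows the bias that motivates the identification results, and does not implement the authors' Bayesian moment-identifiability analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# True ordinal covariate with three categories 0 < 1 < 2.
x = rng.choice([0, 1, 2], size=n, p=[0.5, 0.3, 0.2])

# Unidirectional misclassification: a category can only be recorded as a
# *higher* category, never a lower one (rates are hypothetical).
p01, p12 = 0.15, 0.10            # P(report 1 | true 0), P(report 2 | true 1)
x_star = x.copy()
x_star[(x == 0) & (rng.random(n) < p01)] = 1
x_star[(x == 1) & (rng.random(n) < p12)] = 2

# Outcome from a simple linear regression on the true covariate.
beta0, beta1 = 1.0, 2.0
y = beta0 + beta1 * x + rng.normal(scale=1.0, size=n)

def ols_slope(x, y):
    X = np.column_stack([np.ones_like(x, dtype=float), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print("slope using true covariate      :", round(ols_slope(x, y), 3))
print("slope using misclassified proxy :", round(ols_slope(x_star, y), 3))
```

The attenuated slope from the proxy is the kind of distortion that the identifiability results allow one to correct from observable moments alone.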
42.
Statistical process monitoring (SPM) is a very efficient tool for maintaining and improving the quality of a product. In many industrial processes, the end product has two or more attribute-type quality characteristics. Some of these characteristics are independent of one another, yet successive observations on them are Markov dependent, and it is essential to develop a control chart for such situations. In this article, we develop an Independent Attributes Control Chart for Markov Dependent Processes based on an error-probabilities criterion, under the assumption of one-step Markov dependency. Implementation of the chart is similar to that of a Shewhart-type chart. The performance of the chart is studied using the probability of detecting a shift as the criterion. A procedure to identify the attribute(s) responsible for an out-of-control status of the process is also given.
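The sketch below illustrates why one-step Markov dependence matters for the error probabilities of an attribute chart: it simulates a binary (conforming/nonconforming) stream with a given stationary nonconforming rate and lag-one correlation, forms subgroup counts, and compares the false-alarm rate against the independent case under the same count limit. The transition construction, subgroup size, and control limit are all illustrative choices, not the chart proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_markov_attributes(n, p, rho, rng):
    """Binary (0 = conforming, 1 = nonconforming) stream with one-step
    Markov dependence, stationary nonconforming rate p and lag-one
    correlation rho (parameterisation chosen for illustration)."""
    p11 = p + rho * (1 - p)          # P(1 -> 1)
    p01 = p * (1 - rho)              # P(0 -> 1); keeps the chain stationary at p
    x = np.empty(n, dtype=int)
    x[0] = rng.random() < p
    for t in range(1, n):
        prob = p11 if x[t - 1] == 1 else p01
        x[t] = rng.random() < prob
    return x

p, rho, m, k = 0.05, 0.3, 50, 4000   # subgroup size m, k subgroups
ucl = 7                              # hypothetical control limit on the count

for label, r in [("independent", 0.0), ("Markov dependent", rho)]:
    stream = simulate_markov_attributes(m * k, p, r, rng)
    counts = stream.reshape(k, m).sum(axis=1)
    print(f"{label:17s} false-alarm rate: {np.mean(counts > ucl):.4f}")
```

The inflated false-alarm rate under dependence is what forces limits for such processes to be derived from the Markov error probabilities rather than from binomial ones.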
43.
Outlier detection algorithms are intimately connected with robust statistics that down-weight some observations to zero. We define a number of outlier detection algorithms related to the Huber-skip and least trimmed squares estimators, including the one-step Huber-skip estimator and the forward search. Next, we review a recently developed asymptotic theory of these estimators. Finally, we analyse the gauge, the fraction of wrongly detected outliers, for a number of outlier detection algorithms, and establish an asymptotic normal and a Poisson theory for the gauge.
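As a minimal sketch of the gauge idea, the code below applies a one-step Huber-skip-style update (initial least squares fit, a residual cut at c times a robust scale, then a refit on the retained observations) to clean data with no outliers, so every flagged point is a false detection. The initial estimator, scale estimate, and cut-off c are illustrative choices; the empirical gauge is compared with the nominal false-detection level 2(1 - Phi(c)).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def one_step_huber_skip(X, y, beta0, sigma0, c=2.576):
    """One-step Huber-skip style update (illustrative sketch): keep
    observations whose residual under the initial fit is within
    c * sigma0, then refit least squares on the retained set."""
    resid = y - X @ beta0
    keep = np.abs(resid) <= c * sigma0
    beta1 = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    return beta1, ~keep               # flagged observations = detected "outliers"

# Clean data (no outliers), so every detection is a false one.
n, reps, c = 500, 200, 2.576
gauges = []
for _ in range(reps):
    x = rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])
    y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
    beta_init = np.linalg.lstsq(X, y, rcond=None)[0]             # initial estimator
    sigma_init = np.median(np.abs(y - X @ beta_init)) / 0.6745   # robust scale
    _, flagged = one_step_huber_skip(X, y, beta_init, sigma_init, c)
    gauges.append(flagged.mean())

print("empirical gauge :", round(np.mean(gauges), 4))
print("2*(1 - Phi(c))  :", round(2 * (1 - stats.norm.cdf(c)), 4))
```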
44.
This paper studies the effects of non-normality and autocorrelation on the performance of various individuals control charts for monitoring the process mean and/or variance. The traditional Shewhart X chart and moving range (MR) chart are investigated, as well as several types of exponentially weighted moving average (EWMA) charts and combinations of control charts involving these EWMA charts. It is shown that the combination of the X and MR charts does not detect small and moderate parameter shifts as quickly as combinations involving the EWMA charts, and that the performance of the X and MR charts is very sensitive to the normality assumption. It is also shown that certain combinations of EWMA charts can be designed to be robust to non-normality and very effective at detecting small and moderate shifts in the process mean and/or variance. Although autocorrelation can have a significant effect on the in-control performance of these combinations of EWMA charts, their relative out-of-control performance under independence is generally maintained for low to moderate levels of autocorrelation.
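A small simulation in the spirit of such comparisons is sketched below: it estimates the in-control average run length (ARL) of an individuals X chart and a single EWMA chart under normal data and under heavy-tailed t(4) data scaled to unit variance. The EWMA constants (lambda = 0.1, L = 2.7) and the truncation horizon are illustrative, and this is only one EWMA chart rather than the combinations studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def run_length_x_chart(x, mu=0.0, sigma=1.0, k=3.0):
    """Run length of an individuals (X) chart with mu +/- k*sigma limits."""
    hits = np.flatnonzero(np.abs(x - mu) > k * sigma)
    return hits[0] + 1 if hits.size else len(x)

def run_length_ewma(x, mu=0.0, sigma=1.0, lam=0.1, L=2.7):
    """Run length of an EWMA chart with asymptotic control limits
    mu +/- L * sigma * sqrt(lam / (2 - lam)) (constants are illustrative)."""
    limit = L * sigma * np.sqrt(lam / (2.0 - lam))
    z = mu
    for t, xt in enumerate(x, start=1):
        z = lam * xt + (1.0 - lam) * z
        if abs(z - mu) > limit:
            return t
    return len(x)

reps, horizon = 500, 10_000
samplers = [("normal           ", lambda n: rng.normal(size=n)),
            ("t(4), unit var.  ", lambda n: rng.standard_t(4, size=n) / np.sqrt(2.0))]
for label, sampler in samplers:
    arl_x = np.mean([run_length_x_chart(sampler(horizon)) for _ in range(reps)])
    arl_e = np.mean([run_length_ewma(sampler(horizon)) for _ in range(reps)])
    print(f"{label} in-control ARL   X chart: {arl_x:7.1f}   EWMA: {arl_e:7.1f}")
```

The sharp drop in the X chart's in-control ARL under heavy tails, compared with the milder change for the EWMA chart, is the sensitivity to the normality assumption that the abstract describes.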
45.

Item response models are essential tools for analyzing results from many educational and psychological tests. Such models are used to quantify the probability of a correct response as a function of unobserved examinee ability and of other parameters describing the difficulty and the discriminatory power of the questions in the test. Some of these models also incorporate a threshold parameter for the probability of a correct response, to account for the effect of guessing the correct answer in multiple-choice tests. In this article we consider fitting such models using the Gibbs sampler. A data augmentation method for analyzing a normal-ogive model incorporating a threshold guessing parameter is introduced and compared with a Metropolis-Hastings sampling method. The proposed method is an order of magnitude more efficient than the existing method. Another objective of this paper is to develop Bayesian model choice techniques for model discrimination. A predictive approach based on a variant of the Bayes factor is used and compared with another decision-theoretic method that minimizes an expected loss function on the predictive space. A classical model choice technique based on a modified likelihood ratio test statistic is shown to be one component of the second criterion. As a consequence, the Bayesian methods proposed in this paper are contrasted with the classical approach based on the likelihood ratio test. Several examples are given to illustrate the methods.
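The sketch below simulates item responses from a three-parameter normal-ogive model, P(Y = 1 | theta) = c + (1 - c) * Phi(a*theta - b), using the latent-variable representation ("knows the answer" when a latent normal exceeds its threshold, or guesses successfully) that data augmentation schemes exploit. It only generates data and checks the implied proportions correct; it is not the Gibbs sampler or the model-choice machinery of the paper, and all parameter values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

# Three-parameter normal-ogive model (illustrative parameter values):
# P(Y_ij = 1 | theta_i) = c_j + (1 - c_j) * Phi(a_j * theta_i - b_j)
n_person, n_item = 1000, 5
theta = rng.normal(size=n_person)                 # latent abilities
a = rng.uniform(0.8, 2.0, size=n_item)            # discriminations
b = rng.normal(size=n_item)                       # difficulties
c = rng.uniform(0.1, 0.3, size=n_item)            # guessing parameters

# Latent-variable (data augmentation) representation: the answer is correct
# if the examinee either "knows" it (a latent normal variable exceeds zero)
# or guesses it successfully.
eta = a * theta[:, None] - b                      # n_person x n_item
knows = rng.normal(size=eta.shape) < eta          # P(knows) = Phi(eta)
guesses = rng.random(eta.shape) < c
y = (knows | guesses).astype(int)

# Sanity check: observed vs model-implied proportion correct per item.
p_model = c + (1 - c) * norm.cdf(eta).mean(axis=0)
print("observed :", np.round(y.mean(axis=0), 3))
print("model    :", np.round(p_model, 3))
```

Conditioning on the "knows"/"guesses" indicators is what reduces the normal-ogive-with-guessing model to a probit-style problem with tractable conditional distributions, which is the appeal of the augmentation approach.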
46.
In some situations, an appropriate quality measure uses three or more discrete levels to classify a product characteristic. For these situations, some control charts have been developed based on statistical criteria alone, without regard to economic considerations. In this paper, we develop economic and economic-statistical designs (ESD) for three-level control charts. We apply the cost model proposed by Costa and Rahim [Economic design of X charts with variable parameters: the Markov chain approach, J Appl Stat 28 (2001), 875-885]. Furthermore, we assume that the length of time the process remains in control is exponentially distributed, which allows us to apply the Markov chain approach in developing the cost model. We apply a genetic algorithm to determine the optimal values of the model parameters by minimizing the cost function. A numerical example is provided to illustrate the performance of the proposed models and to compare the costs of the purely economic and economic-statistical designs for three-level control charts. A sensitivity analysis is also conducted on this numerical example.
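The sketch below shows the general shape of an economic-statistical design search: an evolutionary optimizer (scipy's differential evolution, standing in for the genetic algorithm used by the authors) minimizes an hourly-cost surrogate over the sample size, sampling interval, and control-limit width, with a false-alarm constraint added as a penalty in ESD style. The cost function here is a stylized, hypothetical stand-in, not the Costa-Rahim model, and every coefficient is made up for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import norm

# Stylized hourly-cost surrogate for the economic design of a control chart
# (NOT the Costa-Rahim cost model; functional form and coefficients are
# hypothetical, for illustration only).
LAM = 0.02                     # shifts per hour (exponential in-control times)
DELTA = 1.0                    # shift size in sigma units
A1, A2 = 1.0, 0.1              # fixed / per-unit sampling cost
C_FALSE, C_OOC = 50.0, 100.0   # cost per false alarm / per out-of-control hour
ALPHA_MAX = 0.005              # ESD-style constraint on the false-alarm rate

def hourly_cost(params):
    n, h, k = params
    n = max(1, int(round(n)))
    alpha = 2 * norm.sf(k)                                       # false-alarm prob.
    power = norm.sf(k - DELTA * np.sqrt(n)) + norm.cdf(-k - DELTA * np.sqrt(n))
    cost = (A1 + A2 * n) / h                                     # sampling
    cost += C_FALSE * alpha / h                                  # false alarms
    cost += C_OOC * LAM * h / power                              # undetected shifts
    if alpha > ALPHA_MAX:                                        # statistical penalty
        cost += 1e4 * (alpha - ALPHA_MAX)
    return cost

result = differential_evolution(hourly_cost,
                                bounds=[(1, 20), (0.25, 8.0), (2.0, 4.0)],
                                seed=0)
n_opt, h_opt, k_opt = result.x
print(f"n = {int(round(n_opt))}, h = {h_opt:.2f} h, k = {k_opt:.2f}, "
      f"cost = {result.fun:.2f} per hour")
```

Dropping the penalty term recovers a purely economic design, which is exactly the comparison the paper's numerical example makes.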
47.
The standard frequency-domain approximation to the Gaussian likelihood of a sample from an ARMA process is considered. The Newton-Raphson and Gauss-Newton numerical maximisation algorithms are evaluated for this approximate likelihood, and the relationships between these algorithms and those of Akaike and Hannan are explored. In particular, it is shown that Hannan's method has certain computational advantages compared with the other spectral estimation methods considered.
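For concreteness, the sketch below writes down the frequency-domain (Whittle) approximation to the Gaussian likelihood for the simplest ARMA case, an AR(1), and maximises it numerically. A BFGS quasi-Newton routine is used as a stand-in for the Newton-Raphson/Gauss-Newton iterations discussed in the paper; the model, series length, and parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Simulate an AR(1) series (phi and sigma are illustrative).
n, phi_true, sigma_true = 2000, 0.6, 1.0
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.normal(scale=sigma_true)

# Periodogram at the Fourier frequencies 2*pi*j/n, j = 1, ..., n//2 - 1.
freqs = 2 * np.pi * np.arange(1, n // 2) / n
periodogram = np.abs(np.fft.fft(x)[1:n // 2]) ** 2 / (2 * np.pi * n)

def neg_whittle_loglik(params):
    """Negative Whittle (frequency-domain) approximation to the Gaussian
    log-likelihood of an AR(1) model: sum of log f(w_j) + I(w_j)/f(w_j)."""
    phi, log_sigma2 = params
    sigma2 = np.exp(log_sigma2)
    spec = sigma2 / (2 * np.pi * np.abs(1 - phi * np.exp(-1j * freqs)) ** 2)
    return np.sum(np.log(spec) + periodogram / spec)

# Quasi-Newton maximisation (BFGS) of the approximate likelihood.
fit = minimize(neg_whittle_loglik, x0=np.array([0.0, 0.0]), method="BFGS")
print("phi_hat    :", round(fit.x[0], 3))
print("sigma2_hat :", round(np.exp(fit.x[1]), 3))
```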
48.
The estimation of the incremental cost-effectiveness ratio (ICER) has received increasing attention recently. The ICER is expressed as the ratio of the change in costs of a therapeutic intervention to the change in the effects of the intervention. Despite the intuitive interpretation of the ICER as an additional cost per additional unit of benefit, estimating the distribution of a ratio of two stochastically dependent quantities is a challenge. A vast literature on statistical methods for the ICER has developed over the past two decades, but none of these methods provides an unbiased estimator. Here, to obtain an unbiased estimator of the cost-effectiveness ratio (CER), a zero intercept in the bivariate normal regression is assumed. For equal sample sizes, the Iman-Conover algorithm is applied to construct the desired variance-covariance matrix of two random bivariate samples, and the estimation then follows the same approach as for the CER to obtain an unbiased estimator of the ICER. For unequal sample sizes, the bootstrap method is combined with the Iman-Conover algorithm. Simulation experiments are conducted to evaluate the proposed method. The regression-type estimator performs overwhelmingly better than the sample mean estimator in terms of mean squared error in all cases.
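For context, the sketch below computes the conventional plug-in ICER (incremental mean cost over incremental mean effect) on simulated two-arm data and attaches a percentile bootstrap interval. This is the sample-mean ratio estimator that the paper improves upon, not the authors' Iman-Conover regression-type unbiased estimator, and all cost/effect distributions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated cost/effect data for two arms (all parameters hypothetical).
n0, n1 = 150, 150
cost0, eff0 = rng.normal(1000, 200, n0), rng.normal(2.0, 0.5, n0)
cost1, eff1 = rng.normal(1400, 250, n1), rng.normal(2.6, 0.5, n1)

def icer(c1, e1, c0, e0):
    """Conventional plug-in ICER: incremental mean cost divided by
    incremental mean effect (the ratio estimator, not the paper's
    regression-type unbiased estimator)."""
    return (c1.mean() - c0.mean()) / (e1.mean() - e0.mean())

point = icer(cost1, eff1, cost0, eff0)

# Non-parametric bootstrap for the sampling distribution of the ratio.
boot = np.empty(4000)
for b in range(boot.size):
    i0, i1 = rng.integers(0, n0, n0), rng.integers(0, n1, n1)
    boot[b] = icer(cost1[i1], eff1[i1], cost0[i0], eff0[i0])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"ICER = {point:.1f} per unit of effect  (95% bootstrap CI: {lo:.1f}, {hi:.1f})")
```

The skewed bootstrap distribution of this ratio is a simple way to see why an unbiased estimator of the ICER is non-trivial to construct.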
49.
Bayesian analysis often requires the researcher to employ Markov Chain Monte Carlo (MCMC) techniques to draw samples from a posterior distribution, which in turn are used to make inferences. Several approaches have been developed for determining convergence of the chain as well as the sensitivity of the resulting inferences. This work develops a Hellinger-distance approach to MCMC diagnostics. An approximation to the Hellinger distance between two distributions f and g, based on sampling, is introduced, and its accuracy is studied via simulation. A criterion for chain convergence based on this Hellinger distance is proposed, as well as a criterion for sensitivity studies. These criteria are illustrated using a dataset concerning Anguilla australis, an eel native to New Zealand.
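The sketch below shows one simple way to approximate the Hellinger distance between two batches of draws, via kernel density estimates on a common grid, and uses it on segments of a toy random-walk Metropolis chain targeting N(0, 1). The KDE-grid construction, the toy sampler, and the segment choices are illustrative assumptions; they are not necessarily the approximation or the convergence criterion proposed in the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

def hellinger_from_samples(a, b, grid_size=512):
    """Approximate Hellinger distance between the distributions that
    generated samples a and b, via kernel density estimates on a common
    grid: H = sqrt(1 - integral of sqrt(f*g))."""
    grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), grid_size)
    fa, fb = gaussian_kde(a)(grid), gaussian_kde(b)(grid)
    bc = np.sqrt(fa * fb).sum() * (grid[1] - grid[0])   # Bhattacharyya coefficient
    return np.sqrt(max(0.0, 1.0 - bc))

rng = np.random.default_rng(7)

# Toy chain: random-walk Metropolis targeting N(0, 1), started far away.
target_logpdf = lambda x: -0.5 * x * x
chain = np.empty(20_000)
chain[0], logp = 5.0, target_logpdf(5.0)
for t in range(1, chain.size):
    prop = chain[t - 1] + rng.normal(scale=0.5)
    logp_prop = target_logpdf(prop)
    if np.log(rng.random()) < logp_prop - logp:
        chain[t], logp = prop, logp_prop
    else:
        chain[t] = chain[t - 1]

# A small distance between segments suggests they explore the same distribution.
print("middle vs late  :", round(hellinger_from_samples(chain[1000:5000], chain[15000:]), 3))
print("burn-in vs late :", round(hellinger_from_samples(chain[:1000], chain[15000:]), 3))
```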
50.
This article analyses diffusion-type processes from a new point of view. Consider two statistical hypotheses about a diffusion process. We do not use a classical Neyman-Pearson test to reject or accept one of the hypotheses, nor do we take a Bayesian approach. As an alternative, we propose using the likelihood paradigm to characterize the statistical evidence in support of these hypotheses. The method is based on evidential inference as introduced and described by Royall [Royall R. Statistical evidence: a likelihood paradigm. London: Chapman and Hall; 1997]. In this paper, we extend Royall's theory to the case where the data are observations from a diffusion-type process rather than i.i.d. observations. The empirical distribution of the likelihood ratio is used to formulate the probabilities of strong, misleading, and weak evidence. Since the strength of evidence can be affected by the sampling characteristics, we present a simulation study that demonstrates these effects, and we show how misleading evidence can be controlled and reduced by adjusting these characteristics. As an illustration, we apply the method to Microsoft stock prices.
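A minimal sketch of the evidential bookkeeping is given below for a deliberately simplified setting: discretely observed increments of a drifted Brownian motion (i.i.d. normal log-returns) rather than a general diffusion, two hypotheses about the drift, and Royall's conventional benchmark k = 8 for strong evidence. The drift, volatility, sample size, and the i.i.d. simplification are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

# Two simple hypotheses about the drift of discretely observed log-prices,
# i.e. about the mean of the i.i.d. normal daily increments (annualised
# values below are illustrative).
mu1, mu2, sigma, dt = 0.10, -0.10, 0.20, 1 / 252
n, reps, k = 5 * 252, 5000, 8            # five years of daily data, k = 8 benchmark

def log_likelihood_ratio(x):
    """log LR of H1: mean = mu1*dt against H2: mean = mu2*dt for the increments."""
    s2 = sigma ** 2 * dt
    return np.sum((x - mu2 * dt) ** 2 - (x - mu1 * dt) ** 2) / (2 * s2)

# Generate data under H1 and classify the strength of the resulting evidence.
llr = np.array([log_likelihood_ratio(rng.normal(mu1 * dt, sigma * np.sqrt(dt), n))
                for _ in range(reps)])
strong     = np.mean(llr >= np.log(k))    # correctly favours H1 strongly
misleading = np.mean(llr <= -np.log(k))   # strongly favours the wrong hypothesis
weak       = 1 - strong - misleading
print(f"strong: {strong:.3f}  weak: {weak:.3f}  misleading: {misleading:.3f}")
```

Varying the sampling horizon or the step size in this sketch changes the three probabilities, which is the kind of sampling-characteristic effect the paper studies and exploits to control misleading evidence.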