By access type:
  Subscription full text: 4080
  Free: 88
  Free (domestic): 14
By discipline:
  Management: 234
  Ethnology: 1
  Demography: 60
  Collected works: 44
  Theory and methodology: 26
  General: 385
  Sociology: 38
  Statistics: 3394
By year:
  2024: 1    2023: 21   2022: 37   2021: 25   2020: 71
  2019: 146  2018: 163  2017: 271  2016: 132  2015: 88
  2014: 117  2013: 1185 2012: 360  2011: 117  2010: 126
  2009: 138  2008: 130  2007: 99   2006: 104  2005: 93
  2004: 85   2003: 75   2002: 72   2001: 66   2000: 61
  1999: 61   1998: 57   1997: 43   1996: 24   1995: 21
  1994: 28   1993: 19   1992: 25   1991: 9    1990: 15
  1989: 9    1988: 17   1987: 8    1986: 6    1985: 4
  1984: 12   1983: 13   1982: 6    1981: 5    1980: 1
  1979: 6    1978: 5    1977: 2    1975: 2    1973: 1
A total of 4182 results were retrieved (search time: 0 ms).
102.
We compare the performance of seven robust estimators for the parameter of an exponential distribution. These include the debiased median and two optimally weighted one-sided trimmed means; we also introduce four new estimators: the Transform, Bayes, Scaled, and Bicube estimators. The Monte Carlo comparisons cover three sample sizes and six situations, and are evaluated in terms of a new performance measure, Mean Absolute Differential Error (MADE), together with a premium/protection interpretation of MADE. The comparisons are organized to enhance statistical power by making maximal use of common random deviates. The Transform estimator provides the best performance as judged by MADE, and the singly trimmed mean and the Transform method define the efficient frontier of premium/protection.
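The abstract's main power-enhancing device, common random deviates, is easy to illustrate: every estimator is applied to the same simulated samples, so differences in error reflect the estimators rather than simulation noise. Below is a minimal Monte Carlo sketch in that spirit; plain mean absolute error stands in for MADE (whose exact definition is given in the paper), and the simple one-sided trimmed mean carries none of the paper's optimal weights.

```python
import numpy as np

rng = np.random.default_rng(42)

def mle(x):
    # Sample mean: the maximum likelihood estimator of the exponential scale.
    return x.mean()

def debiased_median(x):
    # The median of an Exponential(theta) sample estimates theta * ln 2;
    # dividing by ln 2 debiases it.
    return np.median(x) / np.log(2.0)

def trimmed_mean(x, trim=0.1):
    # One-sided trimmed mean: drop the largest `trim` fraction of the sample.
    # (The paper's optimally weighted versions are not reproduced here.)
    xs = np.sort(x)
    return xs[: int(len(xs) * (1 - trim))].mean()

theta, n, reps = 2.0, 20, 5000
estimators = {"mle": mle, "debiased median": debiased_median,
              "one-sided trimmed": trimmed_mean}
errors = {name: [] for name in estimators}
for _ in range(reps):
    # Common random deviates: every estimator sees the SAME sample.
    x = rng.exponential(theta, size=n)
    for name, est in estimators.items():
        errors[name].append(abs(est(x) - theta))

for name in estimators:
    print(f"{name:18s} mean absolute error = {np.mean(errors[name]):.4f}")
```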
103.
Recently, several new control chart procedures for short production runs have been introduced. Bothe (1989) and Burr (1989) proposed control chart statistics obtained by scaling the quality characteristic by target values or process estimates of a location and scale parameter. The performance of these control charts can be significantly affected by the use of incorrect scaling parameters, resulting in either an excessive false alarm rate or insensitivity to moderate shifts in the process. To correct these deficiencies, Quesenberry (1990, 1991) developed the Q-chart, which is formed from running process estimates of the sample mean and variance. When both the process mean and variance are unknown, the Q-chart statistic is formed from the standard inverse Z-transformation of a t-statistic. Q-charts do not perform correctly, however, in the presence of special-cause disturbances at process startup. This has recently been supported by results published by Del Castillo and Montgomery (1992), who recommend an alternative control chart procedure based on a first-order adaptive Kalman filter model. Consistent with the recommendations of Del Castillo and Montgomery, we propose an alternative short-run control chart procedure based on the second-order dynamic linear model (DLM). The control chart is shown to be useful for the early detection of unwanted process trends. Model and control chart parameters are updated sequentially in a Bayesian estimation framework, providing the greatest degree of flexibility in the level of prior information incorporated into the model. The result is a weighted moving average control chart statistic which can be used to provide running estimates of process capability. The average run length performance of the control chart is compared with the optimal performance of the exponentially weighted moving average (EWMA) chart, as reported by Gan (1991). Using a simulation approach, the second-order DLM control chart is shown to provide better overall performance than the EWMA for short production run applications.
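As a hedged illustration of the Q-chart statistic described above (the case with both mean and variance unknown), the sketch below computes, for each new observation, a t-statistic against the running estimates from the preceding observations and applies the inverse Z-transformation, so that in-control values are approximately i.i.d. standard normal. The second-order DLM chart the article proposes is not reproduced here.

```python
import numpy as np
from scipy import stats

def q_statistics(x):
    """Q statistics for the unknown-mean, unknown-variance case.

    For r >= 3:  t_r = sqrt((r-1)/r) * (x_r - xbar_{r-1}) / s_{r-1},
    then  Q_r = Phi^{-1}( F_{t, r-2}(t_r) ),
    i.e., the inverse Z-transformation of a t-statistic.
    """
    x = np.asarray(x, dtype=float)
    q = []
    for r in range(3, len(x) + 1):
        past = x[: r - 1]
        xbar, s = past.mean(), past.std(ddof=1)
        t_r = np.sqrt((r - 1) / r) * (x[r - 1] - xbar) / s
        q.append(stats.norm.ppf(stats.t.cdf(t_r, df=r - 2)))
    return np.array(q)

rng = np.random.default_rng(1)
obs = rng.normal(10.0, 2.0, size=15)    # a short production run
print(np.round(q_statistics(obs), 2))   # chart against +/- 3 control limits
```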
104.
We compare minimum Hellinger distance and minimum Hellinger disparity estimates for U-shaped beta distributions. Given suitable density estimates, both methods are known to be asymptotically efficient when the data come from the assumed model family, and robust to small perturbations from the model family. Most implementations use kernel density estimates, which may not be appropriate for U-shaped distributions. We compare fixed-binwidth histograms, percentile mesh histograms, and averaged shifted histograms. Minimum disparity estimates are less sensitive to the choice of density estimate than are minimum distance estimates, and the percentile mesh histogram gives the best results for both minimum distance and minimum disparity estimates. Minimum distance estimates are biased, and a bias-corrected method is proposed. Minimum disparity estimates and bias-corrected minimum distance estimates are comparable to maximum likelihood estimates when the model holds, and give better results than either method of moments or maximum likelihood when the data are discretized or contaminated. Although our results are for the beta density, the implementations are easily modified for other U-shaped distributions such as the Dirichlet or normal generated distribution.
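A minimal sketch of minimum Hellinger distance estimation with a histogram density estimate may help fix ideas. It uses the simplest of the three density estimates compared above (the fixed-binwidth histogram) and omits the proposed bias correction; the beta parameters are recovered by minimizing a discretized Hellinger distance.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(7)
a_true, b_true = 0.5, 0.5                  # a U-shaped beta density
data = rng.beta(a_true, b_true, size=500)

# Fixed-binwidth histogram density estimate on [0, 1].
g_hat, edges = np.histogram(data, bins=25, range=(0, 1), density=True)
mids, widths = (edges[:-1] + edges[1:]) / 2, np.diff(edges)

def hellinger_sq(params):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    f = stats.beta.pdf(mids, a, b)
    # H^2 = integral of (sqrt(f) - sqrt(g_hat))^2, by the midpoint rule.
    return np.sum((np.sqrt(f) - np.sqrt(g_hat)) ** 2 * widths)

res = optimize.minimize(hellinger_sq, x0=[1.0, 1.0], method="Nelder-Mead")
print("minimum Hellinger distance estimate of (a, b):", np.round(res.x, 3))
```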
105.
Here, we consider wavelet-based estimation of the derivatives of a probability density function under random sampling from a weighted distribution, and extend the asymptotic convergence rates established under the i.i.d. setup in Prakasa Rao (1996) to the biased-data setup. Through a simulation study, we compare the performance of the wavelet-based estimator with that of the kernel-based estimator obtained by differentiating the Efromovich (2004) kernel density estimator.
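For orientation, the sketch below implements a generic kernel-type estimator of a density derivative under weighted sampling: each observation is reweighted by the reciprocal of the known biasing function, and the kernel itself is differentiated. This is only in the spirit of the kernel comparator mentioned above; neither the wavelet construction nor the exact Efromovich (2004) estimator is reproduced, and the weight function here is an illustrative assumption.

```python
import numpy as np

def w(x):
    # Assumed known biasing function: w(x) = x gives length-biased sampling.
    return x

def kernel_deriv(u):
    # Derivative K'(u) of the Gaussian kernel K(u) = phi(u).
    return -u * np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def density_derivative(grid, sample, h):
    # Observed data follow g(x) = w(x) f(x) / mu; weighting by 1/w(X_i)
    # undoes the bias, and differentiating the kernel estimates f'(x).
    inv_w = 1.0 / w(sample)
    mu_hat = len(sample) / inv_w.sum()          # plug-in estimate of mu
    u = (grid[:, None] - sample[None, :]) / h
    return mu_hat * (kernel_deriv(u) * inv_w).sum(axis=1) / (len(sample) * h**2)

rng = np.random.default_rng(0)
# Length-biased draws from f = Exp(1): g(x) ~ x e^{-x}, i.e., Gamma(2, 1).
sample = rng.gamma(shape=2.0, scale=1.0, size=2000)
grid = np.linspace(0.5, 4.0, 8)
print(np.round(density_derivative(grid, sample, h=0.3), 3))
# Compare with the true derivative f'(x) = -exp(-x).
```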
106.
ABSTRACT

Every large census operation should undergo an evaluation program to find the sources and extent of the coverage errors inherent in it. In this article, we briefly discuss the statistical methodology for estimating the omission rate in the Indian census using the dual-system estimation (DSE) technique. We explicitly study the correlation bias factor involved in the estimate, its extent, and its consequences. A new potential source of bias in the estimate is identified and discussed: the evaluation survey employs enumerators who are more efficient than those used in the census operation, and this may inflate the dependence between the two lists and lead to significant bias. Some examples demonstrate this argument in various plausible situations. We suggest a simple and flexible approach that can control this bias. The proposed estimator can efficiently overcome the potential bias, achieving the desired degree of accuracy (it is almost unbiased) with relatively high efficiency. Overall improvements in the results are explored through a simulation study on different populations.
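The dual-system estimator itself takes the familiar two-list form N̂ = n₁n₂/m. The stylized simulation below shows the correlation bias discussed above: when capture in the two lists is positively dependent, the overlap m is inflated and the DSE underestimates the population. The dependence mechanism used here is an illustrative assumption, not the article's model.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000                # true population size (known only in simulation)

def dse_estimate(p_census, p_survey, dependence=0.0):
    # Dual-system (Lincoln-Petersen / Chandrasekar-Deming) estimate
    # N_hat = n1 * n2 / m.  With dependence > 0, people captured by the
    # census are also more likely to be captured by the survey.
    in_census = rng.random(N) < p_census
    p2 = np.where(in_census, p_survey + dependence, p_survey - dependence)
    in_survey = rng.random(N) < np.clip(p2, 0.0, 1.0)
    n1, n2 = in_census.sum(), in_survey.sum()
    m = (in_census & in_survey).sum()
    return n1 * n2 / m

print("independent lists:", round(dse_estimate(0.95, 0.90)))
print("dependent lists  :", round(dse_estimate(0.95, 0.90, dependence=0.05)))
```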
107.
Abstract

We suggest a shrinkage-based technique for estimating the covariance matrix in the high-dimensional normal model with missing data. Our approach is based on the monotone missing scheme assumption, under which missing-value patterns occur completely at random. Our asymptotic framework allows the dimensionality p to grow to infinity together with the sample size N, and extends the methodology of Ledoit and Wolf (2004) to the case of two-step monotone missing data. Two new shrinkage-type estimators are derived, and their dominance over the Ledoit and Wolf (2004) estimator is shown under the expected quadratic loss. We perform a simulation study and conclude that the proposed estimators are successful for a range of missing data scenarios.
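For reference, the baseline being extended is the Ledoit and Wolf (2004) shrinkage estimator, a convex combination of the sample covariance S and a scaled identity: Σ̂ = (1 − ρ)S + ρμI. The sketch below fits it on the complete cases of a two-step monotone missing pattern; the article's new estimators, which also exploit the partially observed block, are not reproduced.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(5)
p, n = 50, 40                       # dimension comparable to sample size
X = rng.standard_normal((n, p)) @ np.diag(np.linspace(0.5, 2.0, p))

# Two-step monotone missingness: the last p2 variables are observed only
# for the first n1 subjects (a stylized version of the paper's scheme).
n1, p2 = 25, 20
X_miss = X.copy()
X_miss[n1:, -p2:] = np.nan

# Ledoit-Wolf shrinkage fitted on complete cases only.
complete = X_miss[~np.isnan(X_miss).any(axis=1)]
lw = LedoitWolf().fit(complete)
print("shrinkage intensity rho =", round(lw.shrinkage_, 3))
print("well-conditioned vs. sample covariance:",
      np.linalg.cond(lw.covariance_) < np.linalg.cond(np.cov(complete.T)))
```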
108.
Abstract

Frailty models are used in survival analysis to account for unobserved heterogeneity in individual risks of disease and death. Shared frailty models have been suggested for analyzing bivariate data on related survival times (e.g., matched-pairs experiments, twin data, or family data), and are frequently used to model heterogeneity in survival analysis. In the most common shared frailty model, the hazard function is the product of a random factor (the frailty) and a baseline hazard function common to all individuals, under certain assumptions about the baseline distribution and the distribution of the frailty. In this paper, we introduce shared gamma frailty models with reversed hazard rate. We develop a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique to estimate the parameters involved in the model. We present a simulation study to compare the true values of the parameters with the estimated values, and we apply the proposed model to the Australian twin data set.
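The shared-frailty mechanism is easy to see in simulation: one gamma variate per pair multiplies both members' hazards, inducing within-pair dependence. The sketch below uses a classical (not reversed) hazard with an exponential baseline, so it illustrates the mechanism only, not the reversed-hazard-rate model introduced in the paper, and no MCMC estimation is attempted.

```python
import numpy as np

rng = np.random.default_rng(11)

def shared_gamma_frailty_pairs(n_pairs, frailty_var=0.8, base_rate=0.1):
    # Conditional on the pair's frailty Z, each member's hazard is
    # Z * base_rate (exponential baseline). Z ~ Gamma with mean 1 and
    # variance frailty_var, shared within a pair.
    shape = 1.0 / frailty_var
    z = rng.gamma(shape, scale=frailty_var, size=n_pairs)   # E[Z] = 1
    t1 = rng.exponential(1.0 / (z * base_rate))
    t2 = rng.exponential(1.0 / (z * base_rate))
    return t1, t2

t1, t2 = shared_gamma_frailty_pairs(20_000)
print("within-pair correlation induced by the shared frailty:",
      round(np.corrcoef(np.log(t1), np.log(t2))[0, 1], 3))
```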
109.
Abstract

In this article, we consider the problem of estimating the population variance on the current (second) occasion in two-occasion successive (rotation) sampling. A class of estimators of the population variance is proposed and its asymptotic properties are discussed. The proposed class is compared with the sample variance estimator used when there is no matching from the previous occasion, and with the Singh et al. (2013) estimator. The optimum replacement policy is discussed. The suggested estimator is shown to be more efficient than both the Singh et al. (2013) estimator and the usual unbiased estimator under no matching. An empirical study is carried out in support of the present study.
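To make the design concrete, the sketch below simulates two occasions with a matched and a fresh portion, and compares the no-matching sample variance with a generic ratio-type composite. The composite is purely illustrative: it is neither the proposed class of estimators nor the Singh et al. (2013) estimator, and the equal weights are an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(21)

N, n, m = 5000, 400, 200      # population, sample per occasion, matched part
rho = 0.8                     # dependence between the two occasions
y1 = rng.gamma(2.0, 2.0, size=N)
y2 = rho * y1 + (1 - rho) * rng.gamma(2.0, 2.0, size=N)

idx1 = rng.choice(N, size=n, replace=False)      # first-occasion sample
matched = idx1[:m]                               # retained on occasion two
fresh = rng.choice(np.setdiff1d(np.arange(N), idx1), size=n - m, replace=False)

def s2(v):
    return np.var(v, ddof=1)

naive = s2(y2[fresh])   # no-matching estimator: fresh draw only
# Ratio-type composite: adjust the matched part by the occasion-1 variance
# ratio, then average with the fresh part (illustrative weights of 1/2).
ratio_part = s2(y2[matched]) * s2(y1[idx1]) / s2(y1[matched])
composite = 0.5 * naive + 0.5 * ratio_part

print("population variance on occasion 2:", round(s2(y2), 3))
print("no-matching estimate             :", round(naive, 3))
print("composite estimate               :", round(composite, 3))
```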
110.
ABSTRACT

In this article, we consider the estimation of R = P(Y < X) when Y and X are two independent three-parameter Lindley (LI) random variables. On the basis of two independent samples, the modified maximum likelihood estimator, along with its asymptotic behavior, and a conditional likelihood-based estimator are used to estimate R. We also propose a sample-based estimate of R and an associated credible interval based on an importance sampling procedure. A real-life data set involving the times to breakdown of an insulating fluid is presented and analyzed for illustrative purposes.
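Alongside the likelihood-based procedures above, R = P(Y < X) also admits a simple sample-based plug-in: the Mann-Whitney-type proportion of cross-sample pairs with y < x. The sketch below uses ordinary exponential samples purely for illustration; the three-parameter Lindley model and the importance-sampling credible interval are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(9)

# Two independent samples: X plays "strength", Y plays "stress".
x = rng.exponential(3.0, size=200)
y = rng.exponential(2.0, size=150)

# Plug-in estimate of R = P(Y < X): proportion of pairs with y_j < x_i.
r_hat = (y[None, :] < x[:, None]).mean()
print("R_hat =", round(r_hat, 3))   # true R = 3 / (3 + 2) = 0.6 here
```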