101.
A Monte Carlo simulation is used to study the performance of hypothesis tests for regression coefficients when least absolute value (LAV) regression methods are used. In small samples, the simulation results suggest that using the bootstrap method to compute standard errors provides improved test performance.
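A minimal sketch of the pairs bootstrap the abstract describes, assuming a simple linear model; the sample size, error distribution, and optimizer below are illustrative choices, not the paper's.

```python
# Pairs bootstrap for LAV regression standard errors: a sketch, assuming
# y = X @ beta + error. Sample size and error law are illustrative.
import numpy as np
from scipy.optimize import minimize

def lav_fit(X, y):
    """Minimize the sum of absolute residuals (LAV / L1 regression)."""
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS start
    return minimize(lambda b: np.sum(np.abs(y - X @ b)),
                    beta0, method="Nelder-Mead").x

def bootstrap_se(X, y, n_boot=500, seed=0):
    """Pairs-bootstrap standard errors for the LAV coefficients."""
    rng = np.random.default_rng(seed)
    n = len(y)
    draws = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)       # resample (x, y) pairs together
        draws[b] = lav_fit(X[idx], y[idx])
    return draws.std(axis=0, ddof=1)

rng = np.random.default_rng(42)
n = 30                                     # small sample, as in the study
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=n)
print("bootstrap SEs:", bootstrap_se(X, y).round(3))
```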
102.
103.
We compare the performance of seven robust estimators for the parameter of an exponential distribution. These include the debiased median and two optimally weighted one-sided trimmed means. We also introduce four new estimators: the Transform, Bayes, Scaled, and Bicube estimators. We make the Monte Carlo comparisons for three sample sizes and six situations, and evaluate them in terms of a new performance measure, Mean Absolute Differential Error (MADE), together with a premium/protection interpretation of MADE. We organize the comparisons to enhance statistical power by making maximal use of common random deviates. The Transform estimator provides the best performance as judged by MADE; the singly trimmed mean and the Transform method define the efficient frontier of premium/protection.
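As a rough illustration of the abstract's Monte Carlo design, the sketch below compares the sample mean with the debiased median (the median divided by ln 2, which unbiases it for the exponential scale) using common random deviates across estimators; plain mean absolute error stands in for the paper's MADE criterion, whose exact definition is given in the paper.

```python
# Monte Carlo comparison sketch for exponential-scale estimators using
# common random deviates. MAE is an illustrative proxy for MADE.
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 20, 5000
samples = rng.exponential(theta, size=(reps, n))     # common deviates

mean_est   = samples.mean(axis=1)
median_est = np.median(samples, axis=1) / np.log(2)  # debiased median

for name, est in [("sample mean", mean_est), ("debiased median", median_est)]:
    print(f"{name:16s} MAE = {np.mean(np.abs(est - theta)):.4f}")
```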
104.
Recently, several new applications of control chart procedures for short production runs have been introduced. Bothe (1989) and Burr (1989) proposed the use of control chart statistics obtained by scaling the quality characteristic by target values or process estimates of a location and scale parameter. The performance of these control charts can be significantly affected by the use of incorrect scaling parameters, resulting in either an excessive "false alarm rate" or insensitivity to the detection of moderate shifts in the process. To correct for these deficiencies, Quesenberry (1990, 1991) developed the Q-chart, which is formed from running process estimates of the sample mean and variance. For the case where both the process mean and variance are unknown, the Q-chart statistic is formed from the standard inverse Z-transformation of a t-statistic. Q-charts do not perform correctly, however, in the presence of special cause disturbances at process startup. This has recently been supported by results published by Del Castillo and Montgomery (1992), who recommend the use of an alternative control chart procedure based upon a first-order adaptive Kalman filter model. Consistent with the recommendations of Del Castillo and Montgomery, we propose an alternative short run control chart procedure based upon the second order dynamic linear model (DLM). The control chart is shown to be useful for the early detection of unwanted process trends. Model and control chart parameters are updated sequentially in a Bayesian estimation framework, providing the greatest degree of flexibility in the level of prior information incorporated into the model. The result is a weighted moving average control chart statistic which can be used to provide running estimates of process capability. The average run length performance of the control chart is compared to the optimal performance of the exponentially weighted moving average (EWMA) chart, as reported by Gan (1991). Using a simulation approach, the second order DLM control chart is shown to provide better overall performance than the EWMA for short production run applications.
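For orientation, here is a sketch of the Q-chart statistic for the unknown-mean, unknown-variance case mentioned above: a running t-statistic mapped through the inverse normal transform so that in-control values are approximately N(0, 1). The constant factors follow Quesenberry's construction as commonly stated; the data and shift are illustrative.

```python
# Q-chart statistic sketch for unknown process mean and variance.
import numpy as np
from scipy import stats

def q_statistics(x):
    """Return Q_r for r = 3..n; the first two observations only
    initialize the running mean and standard deviation."""
    x = np.asarray(x, dtype=float)
    q = []
    for r in range(3, len(x) + 1):
        prev = x[:r - 1]
        xbar, s = prev.mean(), prev.std(ddof=1)
        t = np.sqrt((r - 1) / r) * (x[r - 1] - xbar) / s
        q.append(stats.norm.ppf(stats.t.cdf(t, df=r - 2)))
    return np.array(q)

rng = np.random.default_rng(7)
data = np.concatenate([rng.normal(10, 1, 15), rng.normal(11, 1, 10)])  # shift
print(np.round(q_statistics(data), 2))   # |Q| > 3 signals a special cause
```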
105.
We compare minimum Hellinger distance and minimum Hellinger disparity estimates for U-shaped beta distributions. Given suitable density estimates, both methods are known to be asymptotically efficient when the data come from the assumed model family, and robust to small perturbations from the model family. Most implementations use kernel density estimates, which may not be appropriate for U-shaped distributions. We compare fixed binwidth histograms, percentile mesh histograms, and averaged shifted histograms. Minimum disparity estimates are less sensitive to the choice of density estimate than are minimum distance estimates, and the percentile mesh histogram gives the best results for both minimum distance and minimum disparity estimates. Minimum distance estimates are biased, and a bias-corrected method is proposed. Minimum disparity estimates and bias-corrected minimum distance estimates are comparable to maximum likelihood estimates when the model holds, and give better results than either method of moments or maximum likelihood when the data are discretized or contaminated. Although our results are for the beta density, the implementations are easily modified for other U-shaped distributions such as the Dirichlet or normal-generated distribution.
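A minimal sketch of minimum Hellinger distance estimation for a U-shaped Beta(a, b) density, assuming a fixed-binwidth histogram density estimate; the paper's percentile mesh histogram and bias correction are not reproduced here.

```python
# Minimum Hellinger distance fit of Beta(a, b) against a histogram.
import numpy as np
from scipy import optimize, stats

def mhd_beta(x, bins=20):
    """Minimize the (discretized) Hellinger distance between a histogram
    density estimate and the Beta(a, b) density over (a, b)."""
    heights, edges = np.histogram(x, bins=bins, range=(0, 1), density=True)
    mids, widths = (edges[:-1] + edges[1:]) / 2, np.diff(edges)

    def hellinger_sq(params):
        a, b = params
        if a <= 0 or b <= 0:
            return np.inf                 # keep the search in-bounds
        f = stats.beta.pdf(mids, a, b)
        # squared Hellinger distance on the histogram mesh
        return np.sum((np.sqrt(heights) - np.sqrt(f)) ** 2 * widths)

    return optimize.minimize(hellinger_sq, x0=[0.5, 0.5],
                             method="Nelder-Mead").x

rng = np.random.default_rng(3)
x = rng.beta(0.4, 0.6, size=200)          # U-shaped beta sample
print("MHD estimate of (a, b):", mhd_beta(x).round(3))
```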
106.
Here, we consider wavelet-based estimation of the derivatives of a probability density function under random sampling from a weighted distribution, and extend the results on asymptotic convergence rates under the i.i.d. setup studied in Prakasa Rao (1996) to the biased-data setup. In a simulation study, we compare the performance of the wavelet-based estimator with that of the kernel-based estimator obtained by differentiating the Efromovich (2004) kernel density estimator.
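For orientation, here is an illustrative kernel-type estimator of the density derivative under biased sampling, assuming a length-biased weight w(x) = x and a Gaussian kernel; it divides each kernel term by w(X_i) and normalizes by the harmonic mean (a Jones-type weighting), and is not Efromovich's (2004) estimator itself.

```python
# Kernel estimate of f'(x) from w-biased data; w(x) = x is an assumption.
import numpy as np

def biased_density_derivative(x_grid, data, h, w=lambda x: x):
    """Estimate the density derivative from w-biased observations."""
    wi = w(data)
    mu_hat = len(data) / np.sum(1.0 / wi)      # harmonic-mean normalizer
    u = (x_grid[:, None] - data[None, :]) / h
    # d/dx of the Gaussian kernel K((x - X_i)/h)/h equals -u * phi(u) / h^2
    dk = -u * np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi) / h**2
    return mu_hat * np.mean(dk / wi[None, :], axis=1)

rng = np.random.default_rng(5)
# length-biased sample from Exp(1): accept x with probability prop. to x
raw = rng.exponential(1.0, 20000)
data = raw[rng.random(20000) < raw / raw.max()][:500]
grid = np.linspace(0.2, 3.0, 8)
print(np.round(biased_density_derivative(grid, data, h=0.3), 3))
```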
107.
Every large census operation should undergo an evaluation program to find the sources and extent of inherent coverage errors. In this article, we briefly discuss the statistical methodology for estimating the omission rate in the Indian census using the dual-system estimation (DSE) technique. We explicitly study the correlation bias factor involved in the estimate, its extent, and its consequences. A new potential source of bias in the estimate is identified and discussed: the enumerators appointed for the evaluation survey are more efficient than those used in the census operations, and this may inflate the dependency between the two lists and lead to a significant bias. Examples are given to demonstrate this argument in various plausible situations. We suggest a simple and flexible approach that can control this bias; the proposed estimator overcomes the potential bias, achieving the desired degree of accuracy (it is almost unbiased) with relatively higher efficiency. Overall improvements in the results are explored through a simulation study on different populations.
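The dual-system estimator underlying this methodology is easy to state. The sketch below uses hypothetical counts to show how the omission rate is obtained; positive dependence between the census and survey lists deflates the population estimate and biases the omission rate downward, which is the correlation bias the article studies.

```python
# Dual-system (Lincoln-Petersen / Chandrasekar-Deming) estimation sketch.
# The counts are hypothetical, purely for illustration.
def dual_system_estimate(census_count, survey_count, matched):
    """Estimate the true population size assuming list independence."""
    return census_count * survey_count / matched

n_census, n_survey, n_matched = 9_500, 9_200, 8_900
n_hat = dual_system_estimate(n_census, n_survey, n_matched)
omission_rate = 1 - n_census / n_hat
print(f"N_hat = {n_hat:.0f}, census omission rate = {omission_rate:.3%}")
```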
108.
In this paper, we introduce the concept of p-mean almost periodicity for stochastic processes in nonlinear expectation spaces. The existence and uniqueness of square-mean almost periodic solutions to some nonlinear stochastic differential equations driven by G-Brownian motion are established under some assumptions on the coefficients. The asymptotic stability of the unique square-mean almost periodic solution, in the square-mean sense, is also discussed.
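For reference, the classical (linear-expectation) definition of square-mean almost periodicity reads as follows; as I understand the abstract, the paper's p-mean notion in a nonlinear expectation space replaces the expectation with the sublinear G-expectation and the exponent 2 with p.

```latex
% Classical square-mean almost periodicity (Bohr-type definition).
A process $X \colon \mathbb{R} \to L^{2}(\Omega)$ is square-mean almost
periodic if for every $\varepsilon > 0$ the set
\[
  T(X, \varepsilon) \;=\; \Bigl\{\, \tau \in \mathbb{R} \;:\;
    \sup_{t \in \mathbb{R}} \mathbb{E}\,\bigl|X(t+\tau) - X(t)\bigr|^{2}
    < \varepsilon \,\Bigr\}
\]
is relatively dense in $\mathbb{R}$.
```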
109.
We suggest a shrinkage-based technique for estimating the covariance matrix in the high-dimensional normal model with missing data. Our approach is based on the monotone missing scheme assumption, meaning that missing-value patterns occur completely at random. Our asymptotic framework allows the dimensionality p to grow to infinity together with the sample size N, and extends the methodology of Ledoit and Wolf (2004) to the case of two-step monotone missing data. Two new shrinkage-type estimators are derived, and their dominance over the Ledoit and Wolf (2004) estimator is shown under expected quadratic loss. We perform a simulation study and conclude that the proposed estimators are successful for a range of missing-data scenarios.
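A compact sketch of the complete-data Ledoit and Wolf (2004) estimator the paper extends: shrink the sample covariance S toward a scaled identity. The two-step monotone-missing-data versions in the paper modify how S and the shrinkage intensity are formed; the dimensions below are illustrative.

```python
# Ledoit-Wolf shrinkage toward mu * I (complete-data version).
import numpy as np

def ledoit_wolf_shrink(X):
    """Return the shrunken covariance (1 - rho) * S + rho * mu * I."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n
    mu = np.trace(S) / p                         # average eigenvalue
    d2 = np.linalg.norm(S - mu * np.eye(p), "fro") ** 2
    b2_bar = sum(np.linalg.norm(np.outer(x, x) - S, "fro") ** 2
                 for x in Xc) / n**2
    rho = min(b2_bar, d2) / d2                   # shrinkage intensity
    return (1 - rho) * S + rho * mu * np.eye(p)

rng = np.random.default_rng(9)
X = rng.normal(size=(40, 100))                   # p > n: S is singular
Sigma_hat = ledoit_wolf_shrink(X)
print("condition number:", np.linalg.cond(Sigma_hat).round(1))
```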
110.
Frailty models are used in survival analysis to account for unobserved heterogeneity in individual risks of disease and death. To analyze bivariate data on related survival times (e.g., matched-pairs experiments, twin data, or family data), shared frailty models have been suggested and are frequently used to model such heterogeneity. The most common shared frailty model is one in which the hazard function is the product of a random factor (the frailty) and a baseline hazard function common to all individuals, under certain assumptions about the baseline distribution and the distribution of the frailty. In this paper, we introduce shared gamma frailty models with reversed hazard rate, and develop a Bayesian estimation procedure using the Markov chain Monte Carlo (MCMC) technique to estimate the parameters involved in the model. We present a simulation study to compare the true parameter values with the estimated values, and we apply the proposed model to the Australian twin data set.
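A small simulation sketch of the shared-frailty dependence structure described above, assuming a Weibull baseline and the conventional (non-reversed) hazard formulation; the paper's reversed-hazard-rate model and its Bayesian MCMC fit are not reproduced here.

```python
# Shared gamma frailty simulation: each pair shares Z ~ Gamma(1/theta, theta)
# (mean 1, variance theta); conditional hazard is Z * lambda0(t) with an
# assumed Weibull baseline. Inverse-transform sampling from the conditional
# survivor S(t | z) = exp(-z * (t / scale)**shape).
import numpy as np

rng = np.random.default_rng(11)
n_pairs, theta = 2000, 0.8
shape, scale = 1.5, 10.0                       # Weibull baseline (assumed)

z = rng.gamma(1 / theta, theta, n_pairs)       # shared frailty per pair
u = rng.random((n_pairs, 2))                   # independent uniforms
t = scale * (-np.log(u) / z[:, None]) ** (1 / shape)

corr = np.corrcoef(np.log(t).T)[0, 1]
print("within-pair correlation of log-times:", round(corr, 3))
```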