Subscription full text: 3516 articles
Open access: 90 articles
Domestic open access: 26 articles
By discipline (number of articles):
Management: 221
Labour science: 1
Ethnology: 10
Talent studies: 1
Demography: 47
Collected works: 109
Theory and methodology: 38
General: 1038
Sociology: 32
Statistics: 2135
By year (number of articles):
2024: 3
2023: 13
2022: 20
2021: 24
2020: 53
2019: 88
2018: 114
2017: 177
2016: 74
2015: 78
2014: 133
2013: 861
2012: 282
2011: 139
2010: 153
2009: 117
2008: 141
2007: 157
2006: 143
2005: 110
2004: 108
2003: 86
2002: 74
2001: 85
2000: 78
1999: 48
1998: 39
1997: 31
1996: 38
1995: 27
1994: 14
1993: 20
1992: 20
1991: 10
1990: 14
1989: 9
1988: 10
1987: 2
1986: 4
1985: 5
1984: 3
1983: 4
1982: 4
1981: 1
1980: 8
1979: 4
1978: 5
1976: 1
3632 results in total; search time 671 ms.
121.
An approach to the analysis of time-dependent ordinal quality score data from robust design experiments is developed and applied to an experiment from commercial horticultural research, using concepts of product robustness and longevity that are familiar to analysts in engineering research. A two-stage analysis is used to develop models describing the effects of a number of experimental treatments on the rate of post-sales product quality decline. The first stage uses a polynomial function on a transformed scale to approximate the quality decline for an individual experimental unit using derived coefficients and the second stage uses a joint mean and dispersion model to investigate the effects of the experimental treatments on these derived coefficients. The approach, developed specifically for an application in horticulture, is exemplified with data from a trial testing ornamental plants that are subjected to a range of treatments during production and home-life. The results of the analysis show how a number of control and noise factors affect the rate of post-production quality decline. Although the model is used to analyse quality data from a trial on ornamental plants, the approach developed is expected to be more generally applicable to a wide range of other complex production systems.
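The two-stage idea can be sketched numerically. The following is a minimal illustration on simulated data, not the paper's ordinal-score model: stage one fits a per-unit polynomial to quality over time and keeps the derived coefficients; stage two regresses the derived slopes on a hypothetical binary treatment indicator by ordinary least squares (the paper uses a joint mean and dispersion model instead). All data, sizes, and the treatment effect are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (sketch): hypothetical data -- quality scores for 30 units measured
# at 8 times; a hypothetical binary control factor speeds up the decline.
n_units, n_times = 30, 8
t = np.linspace(0.0, 1.0, n_times)
treatment = rng.choice([0.0, 1.0], size=n_units)
true_slope = -1.0 - 0.5 * treatment
quality = 4.0 + true_slope[:, None] * t + rng.normal(0.0, 0.1, (n_units, n_times))

# Fit a per-unit polynomial (degree 1 here) and keep the derived coefficients.
stage1_coefs = np.array([np.polyfit(t, q, deg=1) for q in quality])  # [slope, intercept]

# Stage 2 (sketch): model the derived slopes as a function of treatment by OLS
# (the paper models both mean and dispersion of these coefficients).
X = np.column_stack([np.ones(n_units), treatment])
beta, *_ = np.linalg.lstsq(X, stage1_coefs[:, 0], rcond=None)
```

A negative coefficient on the treatment column indicates the factor accelerates quality decline.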
122.
A new multiple testing procedure, the generalized augmentation procedure (GAUGE), is introduced. The procedure is shown to control the false discovery exceedance and to be competitive in terms of power. It is also shown how to apply the idea of GAUGE to achieve control of other error measures. Extensions to dependence are discussed, together with a modification valid under arbitrary dependence. We present an application to an original study on prostate cancer and to a benchmark data set on colon cancer.
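The augmentation idea can be sketched as follows. This is a minimal augmentation-style procedure for false discovery exceedance control, not GAUGE itself: start from a Bonferroni (FWER-controlling) rejection set and augment it with the next-smallest p-values while the augmented fraction of the rejection set stays at most gamma. The p-values below are invented for illustration.

```python
import numpy as np

def augmentation_fdx(pvals, alpha=0.05, gamma=0.1):
    # Start from a Bonferroni rejection set of size r0, then add the
    # extra = floor(gamma * r0 / (1 - gamma)) next-smallest p-values, so
    # the augmented fraction extra / (r0 + extra) never exceeds gamma.
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    r0 = int(np.sum(pvals <= alpha / m))
    extra = int(np.floor(gamma * r0 / (1.0 - gamma)))
    return order[: min(r0 + extra, m)]

pvals = np.array([1e-6, 1e-5, 2e-4, 0.003, 0.2, 0.4, 0.6, 0.8, 0.9, 0.95])
rejected = augmentation_fdx(pvals, alpha=0.05, gamma=0.2)
```

With gamma = 0, no augmentation occurs and the procedure reduces to plain Bonferroni.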
123.
For capture–recapture models when covariates are subject to measurement errors and missing data, a set of estimating equations is constructed to estimate population size and relevant parameters. These estimating equations can be solved by an algorithm similar to the EM algorithm. The proposed method is also applicable to the situation when covariates with no measurement errors have missing data. Simulation studies are used to assess the performance of the proposed estimator. The estimator is also applied to a capture–recapture experiment on the bird species Prinia flaviventris in Hong Kong. The Canadian Journal of Statistics 37: 645–658; 2009 © 2009 Statistical Society of Canada
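For orientation only: the paper's estimating-equation approach handles covariates with measurement error, which is well beyond a sketch. The basic quantity being estimated can be illustrated with the far simpler two-occasion Chapman-corrected Lincoln-Petersen estimate of population size; the counts below are hypothetical.

```python
def lincoln_petersen(n1, n2, m2):
    # Chapman-corrected two-occasion estimate: n1 animals marked on the first
    # occasion, n2 caught on the second, m2 of those already marked.
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

N_hat = lincoln_petersen(n1=120, n2=100, m2=30)
```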
124.
Nonparametric density estimation in the presence of measurement error is considered. The usual kernel deconvolution estimator seeks to account for the contamination in the data by employing a modified kernel. In this paper a new approach based on a weighted kernel density estimator is proposed. Theoretical motivation is provided by the existence of a weight vector that perfectly counteracts the bias in density estimation without generating an excessive increase in variance. In practice a data driven method of weight selection is required. Our strategy is to minimize the discrepancy between a standard kernel estimate from the contaminated data on the one hand, and the convolution of the weighted deconvolution estimate with the measurement error density on the other hand. We consider a direct implementation of this approach, in which the weights are optimized subject to sum and non-negativity constraints, and a regularized version in which the objective function includes a ridge-type penalty. Numerical tests suggest that the weighted kernel estimation can lead to tangible improvements in performance over the usual kernel deconvolution estimator. Furthermore, weighted kernel estimates are free from the problem of negative estimation in the tails that can occur when using modified kernels. The weighted kernel approach generalizes to the case of multivariate deconvolution density estimation in a very straightforward manner.
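The direct implementation of the weight-selection strategy can be sketched with Gaussian kernels and a known Gaussian error density. Everything below (data, bandwidth, solver) is an assumption for illustration; scipy's SLSQP merely stands in for a constrained quadratic solver. The key convenience is that convolving a Gaussian-kernel estimate with a Gaussian error density yields another Gaussian kernel with an inflated bandwidth.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical contaminated sample: observed X = true value + N(0, err_sd^2)
# measurement error with known err_sd; h is the kernel bandwidth.
n, err_sd, h = 60, 0.4, 0.35
x_obs = rng.normal(0.0, 1.0, n) + rng.normal(0.0, err_sd, n)
grid = np.linspace(-4.0, 4.0, 121)

def kde(weights, bandwidth):
    # Weighted Gaussian kernel density estimate evaluated on the grid.
    k = norm.pdf(grid[:, None], loc=x_obs[None, :], scale=bandwidth)
    return k @ weights

target = kde(np.full(n, 1.0 / n), h)  # standard KDE of the contaminated data

def discrepancy(w):
    # Convolution of the weighted estimate (bandwidth h) with the N(0, err_sd^2)
    # error density is a Gaussian kernel with bandwidth sqrt(h^2 + err_sd^2).
    return np.sum((kde(w, np.hypot(h, err_sd)) - target) ** 2)

# Optimize the weights subject to the sum and non-negativity constraints.
res = minimize(discrepancy, np.full(n, 1.0 / n), method="SLSQP",
               bounds=[(0.0, 1.0)] * n,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
weights = res.x
```

The regularized version described in the abstract would add a ridge-type penalty on the weights to this objective.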
125.
Semiparametric regression models that use spline basis functions with penalization have graphical model representations. This link is more powerful than previously established mixed model representations of semiparametric regression, as a larger class of models can be accommodated. Complications such as missingness and measurement error are more naturally handled within the graphical model architecture. Directed acyclic graphs, also known as Bayesian networks, play a prominent role. Graphical model-based Bayesian 'inference engines', such as BUGS and VIBES, facilitate fitting and inference. Underlying these are Markov chain Monte Carlo schemes and recent developments in variational approximation theory and methodology.
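The building block these representations re-express is a penalized spline fit, which can be sketched as ridge regression on a truncated-line basis. The data, number of knots, and smoothing parameter below are all hypothetical; the mixed model and graphical model formulations would estimate the smoothing parameter rather than fix it.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical data: a noisy sine curve.
n = 200
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.3, n)

# Truncated-line spline basis with a ridge penalty on the knot coefficients.
knots = np.linspace(0.05, 0.95, 15)
Z = np.maximum(x[:, None] - knots[None, :], 0.0)
C = np.column_stack([np.ones(n), x, Z])
lam = 1.0  # hypothetical smoothing parameter
D = np.diag([0.0, 0.0] + [lam] * len(knots))  # penalize only the knot terms
beta = np.linalg.solve(C.T @ C + D, C.T @ y)
fitted = C @ beta
```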
126.
In this article, we have developed asymptotic theory for the simultaneous estimation of the k means of arbitrary populations under the common mean hypothesis and further assuming that corresponding population variances are unknown and unequal. The unrestricted estimator, the Graybill-Deal-type restricted estimator, the preliminary test, and the Stein-type shrinkage estimators are suggested. A large sample test statistic is also proposed as a pretest for testing the common mean hypothesis. Under the sequence of local alternatives and squared error loss, we have compared the asymptotic properties of the estimators by means of asymptotic distributional quadratic bias and risk. Comprehensive Monte Carlo simulation experiments were conducted to study the relative risk performance of the estimators with reference to the unrestricted estimator in finite samples. Two real-data examples are also furnished to illustrate the application of the suggested estimation strategies.
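The Graybill-Deal-type restricted estimator under the common mean hypothesis can be sketched directly: combine the sample means with weights proportional to n_i / s_i^2. The three samples below, with a shared mean of 5 but unequal variances, are hypothetical.

```python
import numpy as np

def graybill_deal(samples):
    # Combine the k sample means with weights n_i / s_i^2 (inverse estimated
    # variances of the means) -- the restricted estimator under equal means.
    means = np.array([np.mean(s) for s in samples])
    w = np.array([len(s) / np.var(s, ddof=1) for s in samples])
    return np.sum(w * means) / np.sum(w)

rng = np.random.default_rng(2)
samples = [rng.normal(5.0, sd, size=200) for sd in (1.0, 2.0, 4.0)]
mu_hat = graybill_deal(samples)
```

The preliminary-test and shrinkage estimators in the article move between this restricted estimator and the unrestricted per-population means depending on the pretest.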
127.
This study constructs a simultaneous confidence region for two combinations of coefficients of linear models and their ratios based on the concept of generalized pivotal quantities. Many biological studies, such as those on genetics, assessment of drug effectiveness, and health economics, are interested in a comparison of several dose groups with a placebo group and the group ratios. The Bonferroni correction and the plug-in method based on the multivariate-t distribution have been proposed for the simultaneous region estimation. However, the two methods are asymptotic procedures, and their performance in finite sample sizes has not been thoroughly investigated. Based on the concept of generalized pivotal quantity, we propose a Bonferroni correction procedure and a generalized variable (GV) procedure to construct the simultaneous confidence regions. To address a genetic concern of the dominance ratio, we conduct a simulation study to empirically investigate the probability coverage and expected length of the methods for various combinations of sample sizes and values of the dominance ratio. The simulation results demonstrate that the simultaneous confidence region based on the GV procedure provides sufficient coverage probability and reasonable expected length. Thus, it can be recommended in practice. Numerical examples using published data sets illustrate the proposed methods.
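The generalized-pivotal-quantity idea can be sketched for the simplest case, a ratio of two group means, which is not the paper's exact construction for linear-model coefficient combinations. For a normal mean, R = xbar - t * s / sqrt(n) with t ~ t_{n-1} is a generalized pivotal quantity; Monte Carlo draws of the ratio of two independent such pivots give an interval for the mean ratio. The two groups and their parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def gpq_ratio_ci(x, y, n_draws=20000, level=0.95):
    # Draw from the GPQ of each mean and take quantiles of the ratio of draws.
    def gpq_mean(sample):
        n = len(sample)
        t = rng.standard_t(n - 1, size=n_draws)
        return np.mean(sample) - t * np.std(sample, ddof=1) / np.sqrt(n)
    draws = gpq_mean(x) / gpq_mean(y)
    return np.quantile(draws, [(1.0 - level) / 2.0, (1.0 + level) / 2.0])

x = rng.normal(10.0, 1.0, 40)  # hypothetical dose group, true mean 10
y = rng.normal(5.0, 1.5, 40)   # hypothetical placebo group, true mean 5
lo, hi = gpq_ratio_ci(x, y)
```

A simultaneous region for several such ratios would adjust the quantile levels, e.g. by the paper's Bonferroni or GV procedure.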
128.
Eunju Hwang, Statistics, 2017, 51(4): 904-920
In long-memory data sets such as the realized volatilities of financial assets, a sequential test is developed for the detection of structural mean breaks. The long memory, if any, is adjusted by fitting an HAR (heterogeneous autoregressive) model to the data sets and taking the residuals. Our test consists of applying the sequential test of Bai and Perron [Estimating and testing linear models with multiple structural changes. Econometrica. 1998;66:47–78] to the residuals. The large-sample validity of the proposed test is investigated in terms of the consistency of the estimated number of breaks and the asymptotic null distribution of the proposed test. A finite-sample Monte-Carlo experiment reveals that the proposed test tends to produce an unbiased break time estimate, while the usual sequential test of Bai and Perron tends to produce biased break times in the case of long memory. The experiment also reveals that the proposed test has a more stable size than the Bai and Perron test. The proposed test is applied to two realized volatility data sets of the S&P index and the Korean won-US dollar exchange rate for the past 7 years and finds 2 or 3 breaks, while the Bai and Perron test finds 8 or more breaks.
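The first step of the procedure, fitting an HAR model and taking residuals, can be sketched as follows on a hypothetical volatility series. The HAR regressors are the lagged daily value and the 5-day and 22-day lagged averages; the Bai-Perron sequential break test that the paper then applies to the residuals is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(4)

def har_residuals(rv):
    # Fit RV_t = b0 + b_d RV_{t-1} + b_w RV_{t-1}^{(5)} + b_m RV_{t-1}^{(22)}
    # by OLS and return the residuals, to which the proposed test applies
    # the Bai-Perron sequential break procedure.
    daily = rv[21:-1]
    weekly = np.array([rv[i - 5:i].mean() for i in range(22, len(rv))])
    monthly = np.array([rv[i - 22:i].mean() for i in range(22, len(rv))])
    X = np.column_stack([np.ones(len(daily)), daily, weekly, monthly])
    y = rv[22:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Hypothetical persistent volatility series with a level shift halfway through.
n = 600
rv = np.abs(np.cumsum(rng.normal(0.0, 0.05, n))) + 1.0
rv[n // 2:] += 0.5
resid = har_residuals(rv)
```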
129.
A random effects model can account for lack of fit of a regression model and increase the precision of estimated area-level means. However, when the synthetic mean already provides accurate estimates, the prior distribution may inflate the estimation error. It is therefore desirable to consider an uncertain prior distribution, expressed as a mixture of a one-point distribution and a proper prior distribution. In this paper, we develop an empirical Bayes approach for estimating area-level means, using the uncertain prior distribution in the context of a natural exponential family, which we call the empirical uncertain Bayes (EUB) method. The regression models considered in this paper include the Poisson-gamma, the binomial-beta, and the normal-normal (Fay-Herriot) models, which are typically used in small area estimation. We obtain the estimators of hyperparameters based on the marginal likelihood by using a well-known expectation-maximization algorithm and propose the EUB estimators of area means. For risk evaluation of the EUB estimator, we derive a second-order unbiased estimator of a conditional mean squared error by using some techniques of numerical calculation. Through simulation studies and real data applications, we evaluate the performance of the EUB estimator and compare it with the usual empirical Bayes estimator.
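The Poisson-gamma case can be sketched with standard empirical Bayes shrinkage on hypothetical small-area data. Two simplifications relative to the paper: hyperparameters are set by crude moment matching rather than marginal-likelihood EM, and the uncertain-prior mixture (the point mass at the synthetic mean) is not implemented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical data: y_i | theta_i ~ Poisson(n_i * theta_i) for k areas,
# theta_i ~ Gamma(a, rate b) with prior mean a / b = 0.5.
k = 50
n_i = rng.integers(50, 200, size=k).astype(float)
theta = rng.gamma(shape=4.0, scale=0.125, size=k)
y = rng.poisson(n_i * theta)

# Crude moment-matching for the hyperparameters (the paper instead maximizes
# the marginal likelihood with an EM algorithm).
rates = y / n_i
m, v = rates.mean(), rates.var(ddof=1)
b = m / max(v - np.mean(m / n_i), 1e-8)
a = m * b

# Empirical Bayes estimate: the posterior mean (a + y_i) / (b + n_i), a convex
# combination of the prior mean a/b and the raw rate y_i/n_i.
theta_eb = (a + y) / (b + n_i)
```

Each EB estimate lies between the raw area rate and the overall mean, which is the shrinkage the uncertain prior is designed to switch off when the synthetic mean alone suffices.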
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号