132.
This paper focuses on computing the Bayesian reliability of components whose performance characteristics (degradation, such as fatigue and cracks) are observed during a specified period of time. Depending upon the nature of the degradation data collected, we fit a monotone increasing or decreasing function to the data. Since the components are supposed to have different lifetimes, the rate of degradation is assumed to be a random variable. At a critical level of degradation, the time-to-failure distribution is obtained. The exponential and power degradation models are studied, and an exponential density function is assumed for the random variable representing the rate of degradation. The maximum likelihood estimator and Bayesian estimator of the parameter of the exponential density function, the predictive distribution, a hierarchical Bayes approach, and the robustness of the posterior mean are presented. The Gibbs sampling algorithm is used to obtain the Bayesian estimates of the parameter. Illustrations are provided for train wheel degradation data.
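A minimal sketch of the exponential degradation setup described above: each component degrades as D(t) = d0 · exp(r·t), the rate r is drawn from an exponential density, failure occurs when degradation reaches a critical level, and the rate parameter is recovered by maximum likelihood. All names and numeric values here are hypothetical illustrations, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: degradation path D(t) = d0 * exp(r * t), where the
# rate r varies across components and is modelled as Exponential(lam).
lam_true = 2.0          # assumed true parameter of the exponential density
d0, d_crit = 1.0, 5.0   # initial and critical degradation levels
rates = rng.exponential(scale=1.0 / lam_true, size=500)

# At the critical level, the time to failure is T = log(d_crit / d0) / r,
# so a random rate induces a time-to-failure distribution.
failure_times = np.log(d_crit / d0) / rates

# Maximum likelihood estimator of lam from the observed rates:
# for an Exponential(lam) sample, lam_hat = 1 / mean(r).
lam_hat = 1.0 / rates.mean()
print(round(lam_hat, 2))
```

The Bayesian and hierarchical-Bayes estimators in the paper replace the last step with posterior summaries obtained via Gibbs sampling.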
133.
Prior information is often incorporated informally when planning a clinical trial. Here, we present an approach for incorporating prior information, such as data from historical clinical trials, into the nuisance parameter–based sample size re‐estimation in a design with an internal pilot study. We focus on trials with continuous endpoints in which the outcome variance is the nuisance parameter. For planning and analyzing the trial, frequentist methods are considered. Moreover, the external information on the variance is summarized by the Bayesian meta‐analytic‐predictive approach. To incorporate external information into the sample size re‐estimation, we propose to update the meta‐analytic‐predictive prior based on the results of the internal pilot study and to re‐estimate the sample size using an estimator from the posterior. By means of a simulation study, we compare the operating characteristics, such as power and sample size distribution, of the proposed procedure with the traditional sample size re‐estimation approach that uses the pooled variance estimator. The simulation study shows that, if no prior‐data conflict is present, incorporating external information into the sample size re‐estimation improves the operating characteristics compared to the traditional approach. In the case of a prior‐data conflict, that is, when the variance of the ongoing clinical trial is unequal to the prior location, the performance of the traditional sample size re‐estimation procedure is in general superior, even when the prior information is robustified. When considering whether to include prior information in sample size re‐estimation, the potential gains should be balanced against the risks.
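The nuisance-parameter-based re-estimation rests on the standard two-group sample size formula, in which the assumed outcome variance is swapped for an estimate from the internal pilot. A minimal sketch with the normal-approximation formula (the variance values below are hypothetical; the paper's proposal would instead plug in a posterior-based estimator from the updated meta-analytic-predictive prior):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(sigma2, delta, alpha=0.05, power=0.8):
    """Approximate per-group sample size for a two-group comparison of means:
    n = 2 * sigma^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2."""
    z = NormalDist().inv_cdf
    return ceil(2 * sigma2 * (z(1 - alpha / 2) + z(power)) ** 2 / delta ** 2)

# Planning stage: assumed variance 4.0, clinically relevant difference 1.0.
n_planned = n_per_group(sigma2=4.0, delta=1.0)

# Internal pilot suggests the variance was underestimated; re-estimate the
# sample size with the pilot-based variance (value hypothetical).
sigma2_pilot = 5.5
n_reestimated = n_per_group(sigma2=sigma2_pilot, delta=1.0)
print(n_planned, n_reestimated)
```

A prior-data conflict corresponds to the pilot variance landing far from the prior location, which is where the traditional pooled-variance approach tends to do better.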
134.
In this article, an integer-valued self-exciting threshold model with a finite range based on the binomial INARCH(1) model is proposed. Important stochastic properties are derived, and approaches for parameter estimation are discussed. A real-data example about the regional spread of public drunkenness in Pittsburgh demonstrates the applicability of the new model in comparison to existing models. Feasible modifications of the model are presented, which are designed to handle special features such as zero-inflation.
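A minimal simulation sketch of a self-exciting threshold binomial INARCH(1) process: counts have finite range {0, …, n}, the conditional success probability is linear in the previous count, and the linear coefficients switch when the previous count crosses a threshold. The function name, parameterization, and values are illustrative assumptions, not the article's exact specification.

```python
import numpy as np

def simulate_threshold_binarch(T, n, pars_low, pars_high, R, seed=0):
    """Simulate X_t | X_{t-1} ~ Bin(n, pi_t) with pi_t = a + b * X_{t-1} / n,
    where the pair (a, b) switches at the threshold R (self-exciting regime).
    Parameter names and the linear recursion are a hypothetical sketch."""
    rng = np.random.default_rng(seed)
    x = np.empty(T, dtype=int)
    x[0] = rng.binomial(n, pars_low[0])
    for t in range(1, T):
        a, b = pars_low if x[t - 1] <= R else pars_high
        pi_t = min(max(a + b * x[t - 1] / n, 0.0), 1.0)  # keep pi in [0, 1]
        x[t] = rng.binomial(n, pi_t)
    return x

path = simulate_threshold_binarch(T=300, n=20, pars_low=(0.1, 0.3),
                                  pars_high=(0.2, 0.5), R=5)
print(path[:10])
```

The finite range (here n = 20) is what distinguishes this construction from unbounded count models such as the Poisson INARCH(1).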
135.
The Perron test, which is based on a Dickey–Fuller test regression, is a commonly employed approach to test for a unit root in the presence of a structural break of unknown timing. In the case of an innovational outlier (IO), the Perron test tends to exhibit spurious rejections in finite samples when the break occurs under the null hypothesis. In the present paper, a new Perron-type IO unit root test is developed. It is shown in Monte Carlo experiments that the new test does not over-reject the null hypothesis. Even for the case of a level and slope break for trending data, the empirical size is near its nominal level. The test distribution equals that of the known-break-date case. Furthermore, the test is able to identify the true break date very accurately, even for small breaks. The Nelson–Plosser data set serves as an application.
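The building block of any Perron-type procedure is the Dickey–Fuller test regression and the t-statistic on the lagged level. A minimal sketch without break dummies, lag augmentation, or trend (the full Perron-type test adds break-date dummies and searches over candidate break dates):

```python
import numpy as np

def df_regression_tstat(y):
    """t-statistic on y_{t-1} in the plain Dickey-Fuller regression
    dy_t = c + rho * y_{t-1} + e_t (no lags, no trend; a minimal sketch)."""
    dy = np.diff(y)
    ylag = y[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag])
    beta = np.linalg.lstsq(X, dy, rcond=None)[0]
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(1)
rw = np.cumsum(rng.standard_normal(500))   # random walk: unit root holds
print(round(df_regression_tstat(rw), 2))
```

Under the unit root null the statistic follows the non-standard Dickey–Fuller distribution, so rejection requires its tabulated critical values rather than normal ones.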
136.
The robustness of an extended version of Colton's decision theoretic model is considered. The extended version includes the losses due to the patients who are not entered in the experiment but require treatment while the experiment is in progress. Among the topics considered are the effects of using a sample size considerably less than the optimum, use of an incorrect patient horizon, application of a modified loss function, and use of a two-point prior distribution. It is shown that the investigated model is robust with respect to all these changes, with the exception of the use of the modified prior density.
137.
This article considers the nonparametric estimation of absolutely continuous distribution functions of independent lifetimes of non-identical components in k-out-of-n systems, 2 ≤ k ≤ n, from the observed "autopsy" data. In economics, ascending "button" or "clock" auctions with n heterogeneous bidders with independent private values present 2-out-of-n systems. Classical competing risks models are examples of n-out-of-n systems. Under weak conditions on the underlying distributions, the estimation problem is shown to be well-posed, and the suggested extremum sieve estimator is proven to be consistent. This article considers sieve spaces of Bernstein polynomials, which make it easy to impose monotonicity constraints on the estimated distribution functions.
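The appeal of the Bernstein sieve space is that monotonicity of a distribution function is implied by ordering the coefficients: if the coefficient vector is nondecreasing in [0, 1], so is the resulting polynomial. A minimal sketch of this construction on [0, 1] (the coefficient values are hypothetical; in the estimator they would be chosen by the extremum criterion):

```python
import numpy as np
from math import comb

def bernstein_cdf(theta, x):
    """Evaluate F(x) = sum_k theta_k * C(m, k) * x^k * (1 - x)^(m - k) on [0, 1].
    Monotonicity of F follows from nondecreasing coefficients theta in [0, 1];
    this is a minimal sketch of the sieve-space construction."""
    m = len(theta) - 1
    x = np.asarray(x, dtype=float)
    basis = np.array([comb(m, k) * x**k * (1 - x)**(m - k) for k in range(m + 1)])
    return theta @ basis

# Hypothetical nondecreasing coefficients with theta_0 = 0 and theta_m = 1,
# so that F(0) = 0 and F(1) = 1.
theta = np.array([0.0, 0.1, 0.3, 0.6, 0.9, 1.0])
grid = np.linspace(0, 1, 11)
vals = bernstein_cdf(theta, grid)
print(np.round(vals, 3))
```

Lifetimes on a general interval would first be rescaled to [0, 1] before applying this basis.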
138.
The problem of comparing, contrasting and combining information from different sets of data is an enduring one in many practical applications of statistics. A specific problem of combining information from different sources arose in integrating information from three different sets of data generated by three different sampling campaigns at the input stage as well as at the output stage of a grey-water treatment process. For each stage, a common process trend function needs to be estimated to describe the input and output material process behaviours. Once the common input and output process models are established, it is required to estimate the efficiency of the grey-water treatment method. A synthesized tool for modelling different sets of process data is created by assembling and organizing a number of existing techniques: (i) a mixed model of fixed and random effects, extended to allow for a nonlinear fixed effect, (ii) variogram modelling, a geostatistical technique, (iii) a weighted least squares regression embedded in an iterative maximum-likelihood technique to handle linear/nonlinear fixed and random effects and (iv) a formulation of a transfer-function model for the input and output processes together with a corresponding nonlinear maximum-likelihood method for estimation of a transfer function. The synthesized tool is demonstrated, in a new case study, to contrast and combine information from connected process models and to determine the change in one quality characteristic, namely pH, of the input and output materials of a grey-water filtering process.
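Of the techniques listed, the variogram component (ii) is the most self-contained to illustrate: the classical (Matheron) semivariogram estimate averages squared differences of observations a given lag apart. A minimal one-dimensional sketch with synthetic, correlated data (the function name, lag tolerance, and data are hypothetical, not the case-study measurements):

```python
import numpy as np

def empirical_semivariogram(t, z, lags, tol=0.5):
    """Classical (Matheron) semivariogram estimate for a 1-D process:
    gamma(h) = mean of 0.5 * (z_i - z_j)^2 over pairs with |t_i - t_j| close to h."""
    t, z = np.asarray(t, float), np.asarray(z, float)
    d = np.abs(t[:, None] - t[None, :])            # pairwise separations
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2      # pairwise half squared diffs
    gamma = []
    for h in lags:
        mask = np.triu(np.abs(d - h) <= tol, k=1)  # pairs within tol of lag h
        gamma.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

rng = np.random.default_rng(3)
t = np.arange(100.0)
z = np.cumsum(rng.standard_normal(100)) * 0.1     # correlated "process trend" data
gam = empirical_semivariogram(t, z, lags=[1, 5, 10, 20])
print(np.round(gam, 3))
```

Fitting a parametric model to such an empirical variogram supplies the correlation structure that the weighted least squares step in (iii) needs.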
139.
This article discusses the minimax estimator in the partial linear model y = Zβ + f + ε under ellipsoidal restrictions on the parameter space and a quadratic loss function. The superiority of the minimax estimator over the two-step estimator is studied in the mean squared error matrix criterion.
140.
This article considers the issue of performing tests in linear heteroskedastic models when the test statistic employs a consistent variance estimator. Several different estimators are considered, namely: HC0, HC1, HC2, HC3, and their bias-adjusted versions. The numerical evaluation is performed using numerical integration methods; the Imhof algorithm is used to that end. The results show that bias-adjustment of variance estimators used to construct test statistics delivers more reliable tests when they are performed for the HC0 and HC1 estimators, but the same does not hold for the HC3 estimator. Overall, the most reliable test is the HC3-based one.
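The HC0–HC3 estimators share the sandwich form (X'X)^{-1} X' diag(w_i) X (X'X)^{-1} and differ only in how the squared residuals are weighted: HC0 uses them raw, HC1 rescales by n/(n-p), and HC2/HC3 divide by powers of (1 - h_i), where h_i is the leverage. A minimal sketch on simulated heteroskedastic data (the data-generating process is hypothetical):

```python
import numpy as np

def hc_cov(X, resid, kind="HC0"):
    """Heteroskedasticity-consistent covariance of the OLS estimator:
    (X'X)^{-1} X' diag(w_i) X (X'X)^{-1}, with w_i depending on the variant."""
    n, p = X.shape
    h = np.einsum("ij,jk,ik->i", X, np.linalg.inv(X.T @ X), X)  # leverages
    u2 = resid ** 2
    if kind == "HC0":
        w = u2
    elif kind == "HC1":
        w = u2 * n / (n - p)
    elif kind == "HC2":
        w = u2 / (1 - h)
    elif kind == "HC3":
        w = u2 / (1 - h) ** 2
    else:
        raise ValueError(kind)
    bread = np.linalg.inv(X.T @ X)
    meat = X.T @ (w[:, None] * X)
    return bread @ meat @ bread

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.uniform(1, 3, 200)])
e = rng.standard_normal(200) * X[:, 1]   # error variance grows with the regressor
y = 1.0 + 2.0 * X[:, 1] + e
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
se = {k: np.sqrt(hc_cov(X, resid, k)[1, 1])
      for k in ("HC0", "HC1", "HC2", "HC3")}
print({k: round(v, 4) for k, v in se.items()})
```

By construction the standard errors are ordered HC0 < HC1 and HC0 < HC2 < HC3 in any sample with positive leverages, which is why HC3-based tests tend to be the most conservative.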