Article search
Full-text access: paid 2,615 articles; free 80; domestic (China) free 8
By subject: Management (88), Population studies (11), Collected works (12), Theory and methodology (17), General (219), Sociology (17), Statistics (2,339)
By year: 2024 (1), 2023 (30), 2022 (12), 2021 (30), 2020 (57), 2019 (91), 2018 (120), 2017 (181), 2016 (78), 2015 (75), 2014 (98), 2013 (892), 2012 (213), 2011 (66), 2010 (73), 2009 (72), 2008 (59), 2007 (59), 2006 (41), 2005 (60), 2004 (37), 2003 (39), 2002 (39), 2001 (29), 2000 (22), 1999 (29), 1998 (38), 1997 (27), 1996 (17), 1995 (15), 1994 (8), 1993 (8), 1992 (8), 1991 (8), 1990 (11), 1989 (4), 1988 (13), 1987 (3), 1986 (1), 1985 (9), 1984 (5), 1983 (11), 1982 (3), 1980 (3), 1979 (2), 1978 (1), 1977 (2), 1976 (1), 1975 (1), 1973 (1)
A total of 2,703 results were found (search time: 15 ms).
1.
In this paper, we consider the deterministic trend model in which the error process is allowed to be weakly or strongly correlated and subject to non-stationary volatility. Extant estimators of the trend coefficient are analysed. We find that under heteroskedasticity, the Cochrane–Orcutt-type estimator (with some initial condition) can be less efficient than Ordinary Least Squares (OLS) when the process is highly persistent, whereas it is asymptotically equivalent to OLS when the process is less persistent. An efficient non-parametrically weighted Cochrane–Orcutt-type estimator is then proposed. Its efficiency is uniform over weak or strong serial correlation and non-stationary volatility of unknown form. The feasible estimator relies on non-parametric estimation of the volatility function, and the asymptotic theory is provided. We use a data-dependent smoothing bandwidth that automatically adjusts for the strength of non-stationarity in the volatilities. The implementation requires neither pretesting the persistence of the process nor specifying the form of the non-stationary volatility. Finite-sample evaluation via simulations and an empirical application demonstrates the good performance of the proposed estimators.
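As a rough illustration of the estimators being compared (not the paper's non-parametrically weighted proposal), the sketch below simulates a linear trend with AR(1) errors under an assumed time-varying volatility profile and contrasts the OLS and a textbook Cochrane–Orcutt-type slope estimate; all simulation settings are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha, beta, rho = 500, 1.0, 0.5, 0.7           # assumed simulation settings
t = np.arange(1, n + 1)
sigma = 1.0 + 2.0 * t / n                          # assumed non-stationary volatility profile
e = rng.normal(size=n) * sigma
u = np.zeros(n)
for i in range(1, n):                              # AR(1) error process
    u[i] = rho * u[i - 1] + e[i]
y = alpha + beta * t + u

# OLS estimate of the intercept and trend coefficient
X = np.column_stack([np.ones(n), t])
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Cochrane-Orcutt-type estimator: estimate rho from OLS residuals,
# quasi-difference the data, then rerun OLS on the transformed sample.
res = y - X @ b_ols
rho_hat = np.sum(res[1:] * res[:-1]) / np.sum(res[:-1] ** 2)
y_star = y[1:] - rho_hat * y[:-1]
X_star = X[1:] - rho_hat * X[:-1]
b_co = np.linalg.lstsq(X_star, y_star, rcond=None)[0]

print("OLS slope:            ", b_ols[1])
print("Cochrane-Orcutt slope:", b_co[1])
```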
2.
3.
Proportional hazards is a common assumption when designing confirmatory clinical trials in oncology. This assumption affects not only the analysis but also the sample size calculation. The presence of delayed effects causes the hazard ratio to change while the trial is ongoing: at the beginning we do not observe any difference between treatment arms, and only after some unknown time point do differences between the arms start to appear. Hence, the proportional hazards assumption no longer holds, and both the sample size calculation and the analysis methods should be reconsidered. The weighted log-rank test allows weighting of early, middle, and late differences through the Fleming and Harrington class of weights and has been shown to be more efficient when the proportional hazards assumption does not hold. The Fleming and Harrington weights, along with the estimated delay, can be incorporated into the sample size calculation in order to maintain the desired power once the treatment arm differences start to appear. In this article, we explore the impact of delayed effects in group sequential and adaptive group sequential designs and empirically evaluate the power and type I error rate of the weighted log-rank test in a simulated scenario with fixed values of the Fleming and Harrington weights. We also give some practical recommendations regarding which methodology should be used in the presence of delayed effects, depending on certain characteristics of the trial.
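The following sketch shows a generic Fleming–Harrington weighted log-rank statistic with weights S(t-)^ρ (1 - S(t-))^γ, applied to a simulated two-arm trial with a delayed treatment effect; the data-generating settings, censoring time, and weight choice (ρ = 0, γ = 1) are illustrative assumptions, not the designs evaluated in the article.

```python
import numpy as np

def fh_weighted_logrank(time, event, group, rho=0.0, gamma=1.0):
    """Fleming-Harrington weighted log-rank Z statistic for two arms (group 0/1)."""
    uniq = np.unique(time[event == 1])             # distinct event times
    surv = 1.0                                     # pooled Kaplan-Meier estimate at t-
    num, var = 0.0, 0.0
    for t in uniq:
        at_risk = time >= t
        n_t = at_risk.sum()
        n1_t = (at_risk & (group == 1)).sum()
        d_t = ((time == t) & (event == 1)).sum()
        d1_t = ((time == t) & (event == 1) & (group == 1)).sum()
        w = surv ** rho * (1.0 - surv) ** gamma    # FH weight, uses S(t-)
        num += w * (d1_t - d_t * n1_t / n_t)       # observed minus expected in arm 1
        if n_t > 1:
            var += w ** 2 * d_t * (n1_t / n_t) * (1 - n1_t / n_t) * (n_t - d_t) / (n_t - 1)
        surv *= 1.0 - d_t / n_t                    # update the pooled Kaplan-Meier curve
    return num / np.sqrt(var)

rng = np.random.default_rng(1)
t_ctrl = rng.exponential(1.0, 200)                 # control arm, hazard 1
E = rng.exponential(1.0, 200)                      # treatment arm: same hazard before the delay,
t_trt = np.where(E <= 0.5, E, 0.5 + (E - 0.5) * 1.6)   # reduced hazard after an assumed delay of 0.5
time = np.concatenate([t_ctrl, t_trt])
event = (time < 2.0).astype(int)                   # administrative censoring at t = 2
time = np.minimum(time, 2.0)
group = np.concatenate([np.zeros(200, dtype=int), np.ones(200, dtype=int)])
print("FH(0, 1) weighted log-rank Z:", fh_weighted_logrank(time, event, group))
```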
4.
Simulation results are reported on methods that allow both within-group and between-group heteroscedasticity when testing the hypothesis that independent groups have identical regression parameters. The methods are based on a combination of extant techniques, but their finite-sample properties have not been studied. Results on the impact of removing all leverage points, or only bad leverage points, are included. The method used to identify leverage points can matter and can improve control over the Type I error probability. Results are illustrated using data from the Well Elderly II study.
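A hedged sketch of one ingredient of such a comparison follows: a Wald-type test that two independent groups share the same slope, using an HC3-style covariance within each group after a crude hat-value leverage screen. This is a generic illustration, not the specific combination of techniques studied above.

```python
import numpy as np

def slope_and_hc3(x, y):
    """Slope estimate and HC3 variance after dropping high-leverage points (hat > 2p/n)."""
    X = np.column_stack([np.ones_like(x), x])
    hat = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)
    keep = hat <= 2 * X.shape[1] / len(x)          # crude hat-value leverage screen
    X, y = X[keep], y[keep]
    hat = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    r = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = X.T @ np.diag((r / (1 - hat)) ** 2) @ X  # HC3 weighting of squared residuals
    cov = bread @ meat @ bread
    return beta[1], cov[1, 1]

rng = np.random.default_rng(2)
x1, x2 = rng.normal(size=80), rng.normal(size=80)
y1 = 1.0 + 0.5 * x1 + rng.normal(scale=1.0 + np.abs(x1))        # within-group heteroscedasticity
y2 = 1.0 + 0.5 * x2 + rng.normal(scale=2.0, size=80)            # between-group heteroscedasticity
b1, v1 = slope_and_hc3(x1, y1)
b2, v2 = slope_and_hc3(x2, y2)
print("Wald statistic for equal slopes:", (b1 - b2) / np.sqrt(v1 + v2))
```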
5.
Bioequivalence (BE) studies are designed to show that two formulations of one drug are equivalent, and they play an important role in drug development. At the design stage, there may be a high degree of uncertainty about the variability of the formulations and about the actual performance of the test formulation relative to the reference. Therefore, an interim look may be desirable, to stop the study if there is no chance of claiming BE at the end (futility), to claim BE if the evidence is sufficient (efficacy), or to adjust the sample size. Sequential design approaches specifically for BE studies have been proposed in previous publications. We modify the existing methods, focusing on simplified multiplicity adjustment and futility stopping, and name our method the modified sequential design for BE studies (MSDBE). Simulation results demonstrate comparable performance between MSDBE and the originally published methods, while MSDBE offers more transparency and better applicability. The R package MSDBE is available at https://sites.google.com/site/modsdbe/ .
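As a generic illustration of the building blocks mentioned above, the sketch below runs a two one-sided tests (TOST) bioequivalence assessment on simulated log-scale data with a naive interim futility look at half the planned sample; it is not the MSDBE procedure, and all thresholds and data are assumptions.

```python
import numpy as np
from scipy import stats

def tost_90ci(diffs):
    """90% confidence interval for the mean log(test/reference) from paired differences."""
    n = len(diffs)
    m, se = diffs.mean(), diffs.std(ddof=1) / np.sqrt(n)
    t = stats.t.ppf(0.95, n - 1)
    return m - t * se, m + t * se

rng = np.random.default_rng(3)
limits = (np.log(0.8), np.log(1.25))               # standard BE acceptance limits
diffs = rng.normal(loc=0.05, scale=0.25, size=40)  # simulated subject-level log-ratios

lo, hi = tost_90ci(diffs[:20])                     # interim look after 20 subjects
if hi < limits[0] or lo > limits[1]:               # naive futility rule: CI entirely outside the limits
    print("Interim: stop for futility")
else:
    lo, hi = tost_90ci(diffs)                      # final analysis on all subjects
    be = limits[0] <= lo and hi <= limits[1]
    print(f"Final 90% CI: ({lo:.3f}, {hi:.3f}); bioequivalence concluded: {be}")
```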
6.
Ratio-type mean estimators depend on multiple auxiliary variables and unknown parameters in a finite-population setting. We propose a new generalized matrix approach for modeling multivariate mean estimators with two auxiliary variables. Our approach naturally yields a graphical analysis for comparing mean estimators.
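A minimal sketch of the classical idea the abstract builds on is given below: combining two single-auxiliary ratio estimators of a finite-population mean with weights summing to one. The simulated population and the fixed weight are illustrative assumptions, not the matrix formulation proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
N, n = 10_000, 200
x1 = rng.gamma(4.0, 2.0, N)                        # auxiliary variable 1, known for the population
x2 = rng.gamma(3.0, 3.0, N)                        # auxiliary variable 2, known for the population
y = 2.0 + 0.8 * x1 + 0.5 * x2 + rng.normal(0.0, 2.0, N)

idx = rng.choice(N, n, replace=False)              # simple random sample without replacement
ybar, x1bar, x2bar = y[idx].mean(), x1[idx].mean(), x2[idx].mean()

r1 = ybar * x1.mean() / x1bar                      # ratio estimator using x1
r2 = ybar * x2.mean() / x2bar                      # ratio estimator using x2
w = 0.6                                            # assumed weight; optimal weights need variance estimates
print("true mean:", y.mean(), " combined ratio estimate:", w * r1 + (1 - w) * r2)
```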
7.
Mihyun Kim, Statistics, 2019, 53(4): 699–720
Functional principal component scores are commonly used to reduce infinite-dimensional functional data to finite-dimensional vectors. In certain applications, most notably in finance, these scores exhibit tail behaviour consistent with the assumption of regular variation. Knowledge of the index of regular variation, α, is needed to apply methods of extreme value theory. The most commonly used method for estimating α is the Hill estimator. We derive conditions under which the Hill estimator computed from the sample scores is consistent for the tail index of the unobservable population scores.
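The sketch below illustrates the pipeline described above under simplifying assumptions: sample FPCA scores are computed from discretised curves with a heavy-tailed leading score, and the Hill estimator is applied to their absolute values; the data-generating process and the choice k = 100 are assumptions made only to exercise the code.

```python
import numpy as np

def hill(values, k):
    """Hill estimator of the tail index alpha from the k largest absolute values."""
    v = np.sort(np.abs(values))[::-1]
    return k / np.sum(np.log(v[:k] / v[k]))

rng = np.random.default_rng(5)
n, m = 1000, 100
grid = np.linspace(0.0, 1.0, m)
xi = (rng.pareto(2.5, n) + 1.0) * rng.choice([-1.0, 1.0], n)    # heavy-tailed true scores, alpha = 2.5
curves = np.outer(xi, np.sqrt(2.0) * np.sin(np.pi * grid)) + rng.normal(scale=0.1, size=(n, m))

centred = curves - curves.mean(axis=0)
cov = centred.T @ centred / n                      # empirical covariance operator on the grid
eigval, eigvec = np.linalg.eigh(cov)
phi1 = eigvec[:, -1]                               # leading eigenvector (eigenfunction up to scaling/sign)
scores = centred @ phi1 / np.sqrt(m)               # sample FPC scores via a Riemann-sum inner product

print("Hill estimate of the tail index:", hill(scores, k=100))
```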
8.
Herein, we propose a data-driven test that assesses the lack of fit of nonlinear regression models. The test is based on comparing local linear kernel and parametric fits; because local linear fitting is used, no special boundary-corrected kernels are needed at the boundary. Under the parametric null model, the asymptotically optimal bandwidth can be used for bandwidth selection. This selection method leads to a data-driven test that has a limiting normal distribution under the null hypothesis and is consistent against any fixed alternative. The finite-sample properties of the proposed test are illustrated, and its power is compared with that of some existing tests via simulation studies. We illustrate the practicality of the test using two data sets.
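A rough sketch of the test's ingredients follows: a local linear kernel fit is compared with a parametric (linear) null fit through a squared-difference statistic. The Gaussian kernel, the fixed bandwidth, and the omitted calibration step are simplifying assumptions, not the data-driven choices developed in the paper.

```python
import numpy as np

def local_linear(x, y, x0, h):
    """Local linear estimate of E[y | x = x0] with a Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]

rng = np.random.default_rng(6)
n, h = 300, 0.1
x = rng.uniform(0.0, 1.0, n)
y = 1.0 + 2.0 * x + 0.3 * np.sin(4 * np.pi * x) + rng.normal(0.0, 0.3, n)   # mild departure from linearity

X = np.column_stack([np.ones(n), x])               # parametric null fit: simple linear regression
m_par = X @ np.linalg.lstsq(X, y, rcond=None)[0]
m_ll = np.array([local_linear(x, y, xi, h) for xi in x])

T = n * np.sqrt(h) * np.mean((m_ll - m_par) ** 2)  # squared-difference statistic
print("Lack-of-fit statistic (to be calibrated against its null distribution):", T)
```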
9.
Recurrent event data are largely characterized by the rate function, but smoothing techniques for estimating it have never been rigorously developed or studied in the statistical literature. This paper considers the moment and least squares methods for estimating the rate function from recurrent event data. Under an independent censoring assumption on the recurrent event process, we study the statistical properties of the proposed estimators and propose bootstrap procedures for bandwidth selection and for approximating confidence intervals in the estimation of the occurrence rate function. We find that the moment method, without resmoothing via a smaller bandwidth, produces a curve with nicks at the censoring times, whereas the least squares method has no such problem. Furthermore, the asymptotic variance of the least squares estimator is shown to be smaller under regularity conditions. However, in the implementation of the bootstrap procedures, the moment method is computationally more efficient than the least squares method because the former uses condensed bootstrap data. The performance of the proposed procedures is studied through Monte Carlo simulations and an epidemiological example on intravenous drug users.
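A minimal sketch of a moment-type kernel estimate of the occurrence rate function is given below: the pooled event times are smoothed and divided by the number of subjects still under observation. The simulated data and fixed bandwidth are assumptions, and the paper's bootstrap bandwidth selection and resmoothing step are omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
n_subj, rate0, h = 200, 2.0, 0.15
censor = rng.uniform(0.5, 2.0, n_subj)             # independent censoring (end-of-follow-up) times
events = [np.sort(rng.uniform(0.0, c, rng.poisson(rate0 * c))) for c in censor]   # Poisson recurrences

def rate_estimate(t, events, censor, h):
    """Moment-type kernel estimate of the occurrence rate at time t."""
    pooled = np.concatenate(events)
    kern = np.exp(-0.5 * ((t - pooled) / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    at_risk = np.sum(censor >= t)                  # subjects still under observation at t
    return kern.sum() / max(at_risk, 1)

for t in np.linspace(0.3, 1.5, 5):
    print(f"t = {t:.2f}  estimated rate = {rate_estimate(t, events, censor, h):.2f}")
```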
10.
In this paper, we study the estimation of the minimum and maximum location parameters, which represent the minimum guaranteed lifetimes of series and parallel systems of components, respectively, within a general class of scale mixtures. The conditional or underlying distribution is restricted only to being a location-scale family with positive support. The mixing distribution is also quite general: we assume only that it has positive support and a finite second moment. For illustration, several special cases are highlighted, such as the gamma, inverse-Gaussian, and discrete mixtures. Various estimators, including bootstrap bias-corrected estimators, are compared with respect to both mean squared error and Pitman's measure of closeness.
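The sketch below illustrates one estimator pair from this setting under an assumed shifted-exponential special case: the sample minimum as an estimate of the minimum guaranteed lifetime, together with a bootstrap bias-corrected version. It is only an illustration, not the paper's full scale-mixture treatment.

```python
import numpy as np

rng = np.random.default_rng(8)
theta, n, B = 5.0, 50, 2000                        # true minimum guaranteed lifetime, sample size, bootstraps
lifetimes = theta + rng.exponential(2.0, n)        # assumed shifted-exponential lifetimes

theta_hat = lifetimes.min()                        # naive estimator, biased upward
boot_mins = np.array([rng.choice(lifetimes, n, replace=True).min() for _ in range(B)])
theta_bc = 2.0 * theta_hat - boot_mins.mean()      # bootstrap bias-corrected estimator

print("naive estimate:", theta_hat, " bias-corrected estimate:", theta_bc)
```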