1.
We propose an approach to determine the distribution of particular linear combinations of hybrid censored order statistics which is based on the calculation of volumes of polytopes. For this purpose, we establish efficient and compact volume formulas in terms of B-splines. Further, we illustrate our approach for ten different progressive hybrid censoring schemes under an exponential assumption.
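The order-statistics/B-spline connection the abstract builds on can be illustrated in a minimal case: the density of a sum of two independent standard uniforms is itself a B-spline basis element. This is a hypothetical sketch using scipy, not the paper's volume formulas.

```python
import numpy as np
from scipy.interpolate import BSpline

# The density of U1 + U2, with U1, U2 i.i.d. uniform on [0, 1], is the
# triangular "hat" function, i.e. the degree-1 B-spline with knots
# 0, 1, 2 (a minimal instance of the B-spline connection; illustrative).
hat = BSpline.basis_element([0.0, 1.0, 2.0])

# Monte Carlo check of the density near x = 0.5 (true value: 0.5)
rng = np.random.default_rng(6)
s = rng.uniform(size=(500_000, 2)).sum(axis=1)
empirical = (np.abs(s - 0.5) < 0.01).mean() / 0.02
bspline_value = float(hat(0.5))
```

The same Curry-Schoenberg correspondence is what lets volumes of the relevant polytopes be written compactly in terms of B-splines.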
2.
An accurate procedure is proposed to calculate approximate moments of progressive order statistics in the context of statistical inference for lifetime models. The study analyses the performance of power series expansions for approximating the moments of location-scale distributions with high precision and small deviations from the exact values. A comparative analysis of the exact and approximate methods is presented in tables and figures. The approximations are applied in two situations. First, we consider the problem of computing the large-sample variance-covariance matrix of maximum likelihood estimators. We also use the approximations to obtain progressively censored sampling plans for log-normally distributed data. These problems illustrate that the presented procedure computes the moments precisely for numerous censoring patterns and, in many cases, is the only feasible method because exact calculation may not be applicable.
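As a toy version of the exact-versus-approximate comparison, the mean of an ordinary (uncensored) exponential order statistic has a closed form that a Monte Carlo estimate should reproduce. The setup below is illustrative and is not the paper's power-series method.

```python
import numpy as np

def exact_mean_exp_order_stat(i, n):
    """Exact mean of the i-th order statistic (1-indexed) from n i.i.d.
    standard exponential variables: sum_{j=0}^{i-1} 1 / (n - j)."""
    return sum(1.0 / (n - j) for j in range(i))

# Monte Carlo cross-check (ordinary, not progressively censored,
# order statistics; the sample sizes are illustrative)
rng = np.random.default_rng(0)
n, i = 5, 3
samples = np.sort(rng.exponential(size=(200_000, n)), axis=1)
mc_mean = samples[:, i - 1].mean()
exact = exact_mean_exp_order_stat(i, n)
```

For progressive censoring patterns no such simple closed form is available in general, which is what motivates accurate approximations.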
3.
Benjamin Laumen, Statistics, 2019, 53(3): 569–600
In this paper, we revisit the progressive Type-I censoring scheme as originally introduced by Cohen [Progressively censored samples in life testing. Technometrics. 1963;5(3):327–339]. In fact, original progressive Type-I censoring proceeds as progressive Type-II censoring does, but with fixed censoring times rather than censoring times based on the observed failure times. Subsequently, a time truncation was added to this censoring scheme by interpreting the final censoring time as a termination time. As a result, not much work has been done on Cohen's original progressive censoring scheme with fixed censoring times. We therefore discuss distributional results for this scheme and establish exact distributional results in likelihood inference for exponentially distributed lifetimes. In particular, we obtain the exact distribution of the maximum likelihood estimator (MLE). Further, the stochastic monotonicity of the MLE is verified in order to construct exact confidence intervals for both the scale parameter and the reliability.
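For the simplest special case of fixed censoring times, a single Type-I censoring time tau with exponential lifetimes, the MLE of the mean takes the familiar total-time-on-test form. This sketch covers only that simple case, not the paper's progressive scheme.

```python
import numpy as np

def exp_mle_type1(times, tau):
    """MLE of the exponential mean under simple Type-I censoring at tau:
    total time on test divided by the number of observed failures."""
    times = np.asarray(times, dtype=float)
    observed = times <= tau
    d = observed.sum()
    ttt = times[observed].sum() + tau * (~observed).sum()
    if d == 0:
        raise ValueError("no failures before tau; the MLE does not exist")
    return ttt / d

# Simulated check with true mean 2.0 (illustrative parameters)
rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=50_000)
theta_hat = exp_mle_type1(data, tau=3.0)
```

The conditioning on at least one failure seen in the `d == 0` guard is precisely what makes the exact distribution of the MLE nontrivial and worth deriving.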
4.
In this paper, we consider a statistical estimation problem known as atomic deconvolution. Introduced in reliability theory, this model has a direct application to biological data produced by flow cytometers. From a statistical point of view, we aim to infer the percentage of cells expressing the selected molecule and the probability distribution function associated with its fluorescence emission. We propose an adaptive estimation procedure based on the deconvolution procedures introduced by Es, Gugushvili, and Spreij [(2008), 'Deconvolution for an atomic distribution', Electronic Journal of Statistics, 2, 265–297] and Gugushvili, Es, and Spreij [(2011), 'Deconvolution for an atomic distribution: rates of convergence', Journal of Nonparametric Statistics, 23, 1003–1029]. To estimate both the mixing parameter and the mixing density automatically, we use the Lepskii method based on an optimal bandwidth choice derived from a bias-variance decomposition. We then derive convergence rates that are shown to be minimax optimal (up to log terms) over Sobolev classes. Finally, we apply our algorithm to simulated and real biological data.
5.
Proportional hazards is a common assumption when designing confirmatory clinical trials in oncology. This assumption affects not only the analysis but also the sample size calculation. The presence of delayed effects causes a change in the hazard ratio while the trial is ongoing: at the beginning we do not observe any difference between treatment arms, and after some unknown time point the differences between the arms start to appear. Hence the proportional hazards assumption no longer holds, and both the sample size calculation and the analysis methods should be reconsidered. The weighted log-rank test allows weighting of early, middle, and late differences through the Fleming-Harrington class of weights and has been shown to be more efficient when the proportional hazards assumption does not hold. The Fleming-Harrington weights, along with the estimated delay, can be incorporated into the sample size calculation in order to maintain the desired power once the treatment-arm differences start to appear. In this article, we explore the impact of delayed effects in group sequential and adaptive group sequential designs and make an empirical evaluation, in terms of power and type I error rate, of the weighted log-rank test in a simulated scenario with fixed values of the Fleming-Harrington weights. We also give practical recommendations on which methodology to use in the presence of delayed effects, depending on certain characteristics of the trial.
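A minimal sketch of the weighted log-rank statistic with Fleming-Harrington weights, assuming a simple two-sample setting and the standard hypergeometric variance; the function and data names are illustrative, not from the article.

```python
import numpy as np

def fh_weighted_logrank(time, event, group, rho=0.0, gamma=1.0):
    """Two-sample weighted log-rank statistic with Fleming-Harrington
    weights w(t) = S(t-)**rho * (1 - S(t-))**gamma, where S is the
    pooled Kaplan-Meier estimate. FH(0, gamma > 0) up-weights late
    differences, as is relevant under delayed treatment effects.
    Returns the standardized statistic Z (illustrative sketch)."""
    time = np.asarray(time, float)
    event = np.asarray(event, bool)
    group = np.asarray(group, int)
    s_left = 1.0              # pooled KM value just before current time
    num = var = 0.0
    for t in np.unique(time[event]):
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = (event & (time == t)).sum()
        d1 = (event & (time == t) & (group == 1)).sum()
        w = s_left**rho * (1.0 - s_left)**gamma
        num += w * (d1 - d * n1 / n)           # observed minus expected
        if n > 1:
            var += w**2 * d * (n1 / n) * (1.0 - n1 / n) * (n - d) / (n - 1)
        s_left *= 1.0 - d / n  # Kaplan-Meier update after time t
    return num / np.sqrt(var)

# Group 1 survives longer on average, so Z should be strongly negative
rng = np.random.default_rng(2)
time = np.concatenate([rng.exponential(1.0, 200), rng.exponential(3.0, 200)])
event = np.ones(400, dtype=bool)
group = np.repeat([0, 1], 200)
z = fh_weighted_logrank(time, event, group, rho=0.0, gamma=1.0)
```

With rho = 0 and gamma > 0, the earliest events get weight near zero, which is why this class retains power when the treatment effect is delayed.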
6.
The generalized half-normal (GHN) distribution and progressive Type-II censoring are considered in this article for studying statistical inference in constant-stress accelerated life testing. The EM algorithm is used to compute the maximum likelihood estimates. The Fisher information matrix is obtained via the missing information principle and is used to construct asymptotic confidence intervals. Interval estimation is further discussed through bootstrap intervals. The Tierney and Kadane method, an importance sampling procedure, and the Metropolis-Hastings algorithm are used to compute Bayesian estimates. Furthermore, predictive estimates for censored data and the associated prediction intervals are obtained. We consider three optimality criteria for finding the optimal stress level. A real data set illustrates the usefulness of the GHN distribution as an alternative lifetime model to well-known distributions. Finally, a simulation study is provided with discussion.
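Of the Bayesian machinery mentioned, the Metropolis-Hastings step can be sketched on a deliberately simpler stand-in model (exponential likelihood with a Gamma prior), chosen because the exact posterior is available to check against. This is an illustration of the sampler, not the article's GHN implementation.

```python
import numpy as np

def mh_posterior_mean(data, a=1.0, b=1.0, n_iter=20_000, seed=0):
    """Random-walk Metropolis-Hastings for the rate lam of an exponential
    sample with Gamma(a, b) prior. The exact posterior is
    Gamma(a + n, b + sum(x)), so the MCMC output can be verified.
    Returns the posterior mean of lam from the retained draws."""
    rng = np.random.default_rng(seed)
    x = np.asarray(data, float)
    n, s = x.size, x.sum()

    def log_post(lam):
        # log of Gamma(a + n, b + s) density, up to a constant
        return (a + n - 1) * np.log(lam) - (b + s) * lam if lam > 0 else -np.inf

    lam, draws = 1.0, []
    for _ in range(n_iter):
        prop = lam + rng.normal(0.0, 0.2)      # random-walk proposal
        if np.log(rng.uniform()) < log_post(prop) - log_post(lam):
            lam = prop                          # accept
        draws.append(lam)
    return np.mean(draws[n_iter // 2:])         # discard burn-in

rng = np.random.default_rng(5)
data = rng.exponential(scale=0.5, size=200)     # true rate 2
post_mean = mh_posterior_mean(data)
exact = (1.0 + 200) / (1.0 + data.sum())        # exact posterior mean
```

For the GHN model no conjugate form exists, which is why the article resorts to Metropolis-Hastings, importance sampling, and the Tierney-Kadane approximation.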
7.
In this study, E-Bayesian and hierarchical Bayesian estimates of the scale parameter of a Gompertz distribution under Type-II censoring schemes were obtained from fuzzy data under the squared error (SE) loss function. The efficiency of the proposed methods was compared with each other and with the Bayesian estimator using Monte Carlo simulation.
8.
Bioequivalence (BE) studies are designed to show that two formulations of one drug are equivalent, and they play an important role in drug development. At the design stage, there may be a high degree of uncertainty about the variability of the formulations and the actual performance of the test versus the reference formulation. An interim look may therefore be desirable, to stop the study if there is no chance of claiming BE at the end (futility), to claim BE if the evidence is sufficient (efficacy), or to adjust the sample size. Sequential design approaches specifically for BE studies have been proposed in the literature. We modify the existing methods, focusing on simplified multiplicity adjustment and futility stopping, and name our method the modified sequential design for BE studies (MSDBE). Simulation results demonstrate comparable performance between MSDBE and the originally published methods, while MSDBE offers more transparency and better applicability. The R package MSDBE is available at https://sites.google.com/site/modsdbe/ . Copyright © 2015 John Wiley & Sons, Ltd.
9.
该文以唯物史观为指导,阐明了马克思主义文化理论、中国现代文化的形成和中国先进文化的发展方向.作者认为文化是人类社会在经济、政治、精神上的反映,包括意识形态和其他一切精神活动及其产物,对经济、政治又具有强大的反作用.中国现代社会主义文化是由农业封建主义文化在近百年的经济、政治发展的基础上逐渐演变而成的,它的前进方向是面向现代化、面向世界、面向未来的民族的、科学的、大众的社会主义文化.  相似文献   
10.
Recurrent event data are largely characterized by the rate function, but smoothing techniques for estimating the rate function have not been rigorously developed or studied in the statistical literature. This paper considers moment and least squares methods for estimating the rate function from recurrent event data. Under an independent censoring assumption on the recurrent event process, we study statistical properties of the proposed estimators and propose bootstrap procedures for bandwidth selection and for approximating confidence intervals in the estimation of the occurrence rate function. We show that, without resmoothing via a smaller bandwidth, the moment method produces a curve with nicks at the censoring times, whereas the least squares method has no such problem. Furthermore, the asymptotic variance of the least squares estimator is shown to be smaller under regularity conditions. However, in implementing the bootstrap procedures, the moment method is computationally more efficient than the least squares method because the former uses condensed bootstrap data. The performance of the proposed procedures is studied through Monte Carlo simulations and an epidemiological example on intravenous drug users.
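A moment-type kernel estimator of the occurrence rate can be sketched as kernel-smoothed event mass divided by the number of subjects still under observation. This is a hypothetical illustration of the idea, not the paper's exact estimator; all names are invented for the sketch.

```python
import numpy as np

def smoothed_rate(eval_t, event_times, censor_times, h):
    """Kernel-smoothed occurrence rate for recurrent event data:
    Gaussian-kernel density of pooled event times divided by Y(t), the
    number of subjects still under observation at t.
    event_times: list of per-subject event-time arrays;
    censor_times: array of follow-up end times (one per subject)."""
    eval_t = np.atleast_1d(eval_t).astype(float)
    censor_times = np.asarray(censor_times, dtype=float)
    all_events = np.concatenate([np.asarray(e, float) for e in event_times])
    out = np.empty_like(eval_t)
    for k, t in enumerate(eval_t):
        at_risk = (censor_times >= t).sum()          # Y(t)
        kern = (np.exp(-0.5 * ((t - all_events) / h) ** 2)
                / (h * np.sqrt(2.0 * np.pi)))        # Gaussian kernel
        out[k] = kern.sum() / max(at_risk, 1)
    return out

# Homogeneous Poisson processes with rate 2 on [0, C_i]: away from the
# boundaries the estimate should hover near 2 (illustrative check)
rng = np.random.default_rng(4)
cens = rng.uniform(4.0, 6.0, size=500)
events = [np.sort(rng.uniform(0.0, c, size=rng.poisson(2.0 * c)))
          for c in cens]
rate = smoothed_rate([2.0, 3.0], events, cens, h=0.3)
```

Because subjects drop out of Y(t) abruptly at their censoring times while the smoothed numerator changes continuously, a naive estimator of this kind develops the "nicks" at censoring times that the paper analyzes.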