1293 search results.
1.
2.
Proportional hazards are a common assumption when designing confirmatory clinical trials in oncology. This assumption affects not only the analysis but also the sample size calculation. The presence of delayed effects causes the hazard ratio to change while the trial is ongoing: at the beginning no difference between treatment arms is observed, and only after some unknown time point do differences between the arms start to appear. The proportional hazards assumption then no longer holds, and both the sample size calculation and the analysis methods to be used should be reconsidered. The weighted log‐rank test allows a weighting for early, middle, and late differences through the Fleming and Harrington class of weights and has been shown to be more efficient when the proportional hazards assumption does not hold. The Fleming and Harrington weights, along with the estimated delay, can be incorporated into the sample size calculation in order to maintain the desired power once the treatment arm differences start to appear. In this article, we explore the impact of delayed effects in group sequential and adaptive group sequential designs, and we make an empirical evaluation, in terms of power and type‐I error rate, of the weighted log‐rank test in a simulated scenario with fixed values of the Fleming and Harrington class of weights. We also give practical recommendations on which methodology to use in the presence of delayed effects, depending on the characteristics of the trial.
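The Fleming–Harrington weighting described above can be sketched as follows. This is an illustrative implementation only (the function name, two-arm encoding, and variance formula choices are mine, not the authors'): the weight w(t) = S(t-)^ρ (1-S(t-))^γ uses the pooled left-continuous Kaplan–Meier estimate, so ρ = 0, γ > 0 down-weights early differences, which suits delayed effects.

```python
import numpy as np

def fh_logrank(time, event, group, rho=0.0, gamma=1.0):
    """Weighted log-rank Z-statistic with Fleming-Harrington weights
    w(t) = S(t-)**rho * (1 - S(t-))**gamma, where S is the pooled
    left-continuous Kaplan-Meier estimate (illustrative sketch)."""
    time, event, group = map(np.asarray, (time, event, group))
    S = 1.0                       # pooled KM just before the current time
    num = var = 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n_t = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d_t = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        w = S ** rho * (1.0 - S) ** gamma
        num += w * (d1 - d_t * n1 / n_t)      # observed - expected, arm 1
        if n_t > 1:
            var += w ** 2 * d_t * (n1 / n_t) * (1 - n1 / n_t) * (n_t - d_t) / (n_t - 1)
        S *= 1.0 - d_t / n_t                  # update KM after using S(t-)
    return num / np.sqrt(var)
```

With rho = gamma = 0 every weight equals one and the statistic reduces to the ordinary log-rank test.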
3.
While much used in practice, latent variable models raise challenging estimation problems due to the intractability of their likelihood. Monte Carlo maximum likelihood (MCML), as proposed by Geyer & Thompson (1992), is a simulation-based approach to maximum likelihood approximation applicable to general latent variable models. MCML can be described as an importance sampling method in which the likelihood ratio is approximated by Monte Carlo averages of importance ratios simulated from the complete data model corresponding to an arbitrary value of the unknown parameter. This paper studies the asymptotic (in the number of observations) performance of the MCML method in the case of latent variable models with independent observations. This contrasts with previous work on the topic, which considered only conditional convergence to the maximum likelihood estimator for a fixed set of observations. A first important result is that when the reference parameter value is held fixed, the MCML method can only be consistent if the number of simulations grows exponentially fast with the number of observations. If, on the other hand, the reference value is obtained from a consistent sequence of estimates of the unknown parameter, then the requirements on the number of simulations are shown to be much weaker.
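The importance-ratio construction described above can be sketched on a toy Gaussian latent variable model; the model, all names, and the grid maximizer below are my own illustrative choices, not the paper's setup. Here z_j ~ N(θ, 1) and y_j | z_j ~ N(z_j, 1), so the marginal likelihood is y_j ~ N(θ, 2) and the exact MLE is the sample mean, which the MCML maximizer should approximate.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_joint(y, z, theta):
    # complete-data log density, up to an additive constant
    return -0.5 * (z - theta) ** 2 - 0.5 * (y - z) ** 2

def mcml_objective(theta, psi, y, z):
    """Monte Carlo estimate of log L(theta) - log L(psi): a log of
    averaged importance ratios, summed over independent observations."""
    log_ratio = log_joint(y, z, theta) - log_joint(y, z, psi)
    return np.sum(np.log(np.mean(np.exp(log_ratio), axis=0)))

y = rng.normal(1.0, np.sqrt(2.0), size=50)   # observations, true theta = 1
psi = 0.0                                    # arbitrary reference parameter
m = 5000                                     # simulations per observation
# draws from p_psi(z | y), which in this toy model is N((psi + y)/2, 1/2)
z = rng.normal((psi + y) / 2, np.sqrt(0.5), size=(m, len(y)))

grid = np.linspace(-1.0, 3.0, 401)
theta_mcml = grid[np.argmax([mcml_objective(t, psi, y, z) for t in grid])]
```

Because every importance ratio at theta = psi equals one, the objective is exactly zero there; away from psi the estimate is noisy, which is the variance issue the paper's asymptotics address.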
4.
Summary.  Generalized linear latent variable models (GLLVMs), as defined by Bartholomew and Knott, enable modelling of relationships between manifest and latent variables. They extend structural equation modelling techniques, which are powerful tools in the social sciences. However, because of the complexity of the log-likelihood function of a GLLVM, an approximation such as numerical integration must be used for inference. This can drastically limit the number of variables in the model and can lead to biased estimators. We propose a new estimator for the parameters of a GLLVM, based on a Laplace approximation to the likelihood function, which can be computed even for models with a large number of variables. The new estimator can be viewed as an M-estimator, leading to readily available asymptotic properties and correct inference. A simulation study shows its excellent finite sample properties, in particular when compared with a well-established approach such as LISREL. A real data example on the measurement of wealth for the computation of multidimensional inequality is analysed to highlight the importance of the methodology.
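The Laplace approximation at the heart of the proposed estimator can be illustrated in the scalar case (a minimal sketch with my own function names, not the authors' GLLVM code): locate the mode of the integrand and replace the integral by the matching Gaussian integral. For a Gaussian complete-data density the approximation is exact, which gives a checkable example.

```python
import numpy as np

def laplace_log_marginal(log_joint, z0, steps=40, h=1e-5):
    """Laplace approximation to log ∫ exp(log_joint(z)) dz for scalar z:
    find the mode z_hat by Newton's method with numeric derivatives, then
    return log_joint(z_hat) + 0.5*log(2*pi) - 0.5*log(-g''(z_hat))."""
    z = z0
    for _ in range(steps):
        g1 = (log_joint(z + h) - log_joint(z - h)) / (2 * h)
        g2 = (log_joint(z + h) - 2 * log_joint(z) + log_joint(z - h)) / h ** 2
        z -= g1 / g2                         # Newton ascent step
    g2 = (log_joint(z + h) - 2 * log_joint(z) + log_joint(z - h)) / h ** 2
    return log_joint(z) + 0.5 * np.log(2 * np.pi) - 0.5 * np.log(-g2)
```

In a GLLVM the same idea is applied per observation to the integral over the latent variables, with the Hessian determinant replacing -g''.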
5.
This paper first analyses the characteristics of the logarithmic least squares ranking method and argues that it is a good method deserving attention. It then expounds the basic principle of the method, gives a rigorous mathematical derivation of the weighted aggregate ranking vector under group judgement, proposes a new view on how the weighting coefficients in the weighted aggregate ranking should be determined, and illustrates the approach with an example.
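The basic computation can be sketched as follows (function names and the aggregation scheme shown are my own illustration of the standard logarithmic least squares construction, not necessarily the paper's exact weighting proposal): the LLS ranking vector of a positive pairwise comparison matrix is the vector of normalized row geometric means, and group judgements can be aggregated by a weighted geometric mean of the experts' matrices.

```python
import numpy as np

def lls_priorities(A):
    """Logarithmic least squares ranking vector of a positive pairwise
    comparison matrix A (entry A[i, j] judges w_i / w_j): the normalized
    row geometric means minimize sum_{i,j} (log A[i,j] - log(w_i/w_j))^2."""
    g = np.exp(np.log(A).mean(axis=1))   # row geometric means
    return g / g.sum()

def group_lls_priorities(mats, expert_weights):
    """Weighted aggregate ranking for group judgement: combine the experts'
    matrices by a weighted geometric mean, then apply LLS."""
    log_agg = sum(w * np.log(M) for w, M in zip(expert_weights, mats))
    return lls_priorities(np.exp(log_agg))
```

For a perfectly consistent matrix A[i, j] = w_i / w_j both functions recover w exactly.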
6.
This paper studies some problems of the approximation of f by the Lagrange interpolation polynomial L_n(f, x) based on the zeros of the extended Jacobi polynomial (1+x)V_n(x).
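For readers unfamiliar with the object studied, L_n(f, x) can be evaluated stably in barycentric form; this is a generic sketch (the node set below is arbitrary for illustration — the paper's nodes are the zeros of (1+x)V_n(x)).

```python
import numpy as np

def lagrange_interp(nodes, fvals, x):
    """Evaluate the Lagrange interpolation polynomial L_n(f, x) through
    (nodes, fvals) using the barycentric formula; exact for polynomials
    of degree < len(nodes)."""
    nodes = np.asarray(nodes, dtype=float)
    w = np.array([1.0 / np.prod(nodes[j] - np.delete(nodes, j))
                  for j in range(len(nodes))])   # barycentric weights
    diff = x - nodes
    if np.any(diff == 0.0):                      # x coincides with a node
        return fvals[int(np.argmin(np.abs(diff)))]
    t = w / diff
    return float(np.sum(t * fvals) / np.sum(t))
```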
7.
WEIGHTED SUMS OF NEGATIVELY ASSOCIATED RANDOM VARIABLES
In this paper, we establish strong laws for weighted sums of negatively associated (NA) random variables satisfying a higher-order moment condition. Some results of Bai Z.D. & Cheng P.E. (2000) [Marcinkiewicz strong laws for linear statistics. Statist. Probab. Lett. 43, 105-112] and Sung S.K. (2001) [Strong laws for weighted sums of i.i.d. random variables. Statist. Probab. Lett. 52, 413-419] are sharpened and extended from the independent identically distributed case to the NA setting. One of the results of Li D.L. et al. (1995) [Complete convergence and almost sure convergence of weighted sums of random variables. J. Theoret. Probab. 8, 49-76] is also complemented and extended.
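The kind of strong law discussed above can be illustrated numerically. A random permutation of a fixed finite population is a classical example of negatively associated variables (sampling without replacement); the simulation below (my own construction, not the paper's) shows the normalized weighted partial sums shrinking toward zero.

```python
import numpy as np

rng = np.random.default_rng(1)

# NA sequence: a random permutation of a mean-zero finite population.
N = 100_000
population = np.repeat([-1.0, 1.0], N // 2)
X = rng.permutation(population)            # negatively associated draws
a = 1.0 + 0.5 * np.sin(np.arange(N))       # bounded deterministic weights
S = np.cumsum(a * X)                       # weighted partial sums
n = np.arange(1, N + 1)
normalized = np.abs(S) / n                 # |sum_{i<=n} a_i X_i| / n
```

A strong law asserts that `normalized` converges to 0 almost surely; the values at n = 1000 and n = N are already small in this run.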
8.
This paper establishes direct and inverse theorems, together with a characterization of the degree of approximation, for linear combinations of a class of multivariate Gauss-Weierstrass operators under uniform approximation with Jacobi weights.
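The underlying operator can be sketched in one variable (a simplified illustration of the plain Gauss-Weierstrass operator; the paper treats multivariate linear combinations with Jacobi weights): (W_δ f)(x) is the convolution of f with a Gaussian kernel of width √δ, and W_δ f → f as δ → 0.

```python
import numpy as np

def gauss_weierstrass(f, x, delta, half_width=30.0, m=200_001):
    """One-dimensional Gauss-Weierstrass operator
    (W_delta f)(x) = (pi*delta)^(-1/2) * ∫ f(u) exp(-(u - x)^2 / delta) du,
    evaluated by a uniform-grid quadrature (the kernel and all its
    derivatives vanish at the truncation ends, so the sum is accurate)."""
    u = np.linspace(x - half_width, x + half_width, m)
    kernel = np.exp(-(u - x) ** 2 / delta) / np.sqrt(np.pi * delta)
    du = u[1] - u[0]
    return np.sum(f(u) * kernel) * du
```

A convenient check: sin is an eigenfunction of the heat semigroup, with (W_δ sin)(x) = e^{-δ/4} sin(x).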
9.
The standard hypothesis testing procedure in meta-analysis (or multi-center clinical trials) in the absence of treatment-by-center interaction relies on approximating the null distribution of the standard test statistic by a standard normal distribution. For relatively small sample sizes, the standard procedure has been shown by various authors to have poor control of the type I error probability, leading to overly liberal decisions. In this article, two test procedures are proposed which rely on the t-distribution as the reference distribution. A simulation study indicates that the proposed procedures attain significance levels closer to the nominal level than the standard procedure.
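The liberality of the normal reference can be reproduced in a small simulation (my own simplified setup, not the article's exact procedures): with K centers the combined statistic below is exactly t-distributed with K-1 degrees of freedom under the null, so referring it to N(0, 1) inflates the type I error while the t reference stays near nominal.

```python
import numpy as np

rng = np.random.default_rng(2)

K, n, reps = 8, 10, 20_000
z_crit = 1.959964   # 97.5% point of N(0, 1)
t_crit = 2.364624   # 97.5% point of t with K - 1 = 7 df (from tables)

# Null data: two arms per center, no treatment effect anywhere.
x = rng.normal(size=(reps, K, n))
y = rng.normal(size=(reps, K, n))
d = x.mean(axis=2) - y.mean(axis=2)            # per-center effect estimates
T = d.mean(axis=1) / (d.std(axis=1, ddof=1) / np.sqrt(K))

alpha_norm = np.mean(np.abs(T) > z_crit)       # normal reference
alpha_t = np.mean(np.abs(T) > t_crit)          # t reference
```

Because the t critical value exceeds the normal one, every t-rejection is also a normal-rejection, so `alpha_t <= alpha_norm` holds by construction; the simulation additionally shows `alpha_norm` well above the nominal 0.05.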
10.
We prove Conjecture (I) posed by Cusick: for any n positive integers a_1, a_2, …, a_n there exists a real number x such that ‖a_i x‖ ≥ 1/(n+1) for i = 1, 2, …, n, where ‖x‖ denotes the distance from x to the nearest integer.
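The proved bound can be checked numerically by brute force (grid resolution and the test instances are my own choices): search x over a fine grid and compare max_x min_i ‖a_i x‖ with 1/(n+1). The bound is tight for a_i = 1, …, n, attained at x = 1/(n+1).

```python
import numpy as np

def best_min_distance(a, D=1_000_000):
    """Grid approximation of max over x of min_i ||a_i * x||, where ||.||
    is the distance to the nearest integer and x ranges over j/D.  The
    grid loses at most max(a) / (2*D) relative to the true maximum."""
    a = np.asarray(a, dtype=float)
    x = np.arange(D) / D
    m = np.full(D, np.inf)
    for ai in a:
        frac = (ai * x) % 1.0
        m = np.minimum(m, np.minimum(frac, 1.0 - frac))
    return m.max()
```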
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号