Search results: 316 items in total (search time: 46 ms)
51.
A periodically stationary time series has seasonal variances. A local linear trend estimator is proposed to accommodate these unequal variances, and it is compared with the estimator commonly used for a stationary time series. Optimal bandwidth selection for the new trend estimator is also discussed.
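The core device here, local linear estimation, fits a weighted straight line around each target time point. The sketch below is a minimal illustration of that idea only: it uses a Gaussian kernel with a fixed bandwidth and ignores the paper's seasonal-variance weighting; the function name and data are hypothetical.

```python
import math

def local_linear(t, y, t0, h):
    """Local linear fit at time t0 with Gaussian kernel bandwidth h.
    Solves the 2x2 weighted least-squares normal equations and returns
    the intercept beta0, i.e., the estimated trend value at t0."""
    S0 = S1 = S2 = T0 = T1 = 0.0
    for ti, yi in zip(t, y):
        u = (ti - t0) / h
        w = math.exp(-0.5 * u * u)   # Gaussian kernel weight
        d = ti - t0
        S0 += w; S1 += w * d; S2 += w * d * d
        T0 += w * yi; T1 += w * d * yi
    det = S0 * S2 - S1 * S1
    return (S2 * T0 - S1 * T1) / det  # beta0 = trend estimate at t0

# Noise-free linear data: a local linear fit reproduces a line exactly.
t = list(range(20))
y = [2.0 + 0.5 * ti for ti in t]
print(round(local_linear(t, y, 10.0, 3.0), 6))  # → 7.0
```

The bandwidth h controls the bias-variance trade-off that the abstract's "optimal bandwidth selection" refers to: small h tracks local wiggles, large h smooths toward a global line.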
52.
The authors derive analytic expressions for the mean and variance of the log-likelihood ratio for testing equality of k (k ≥ 2) normal populations, and suggest a chi-square approximation and a gamma approximation to the exact null distribution. Numerical comparisons show that the two approximations and the original beta approximation of Neyman and Pearson (1931, Joint Statistical Papers, Cambridge University Press, pp. 116–131) are all accurate, with the gamma approximation the most accurate.
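The statistic being approximated is the standard likelihood ratio for H0: the k normal populations share a common mean and variance. A minimal sketch of computing −2 log Λ (the function name and sample data are hypothetical; the paper's own approximations to its null distribution are not reproduced here):

```python
import math

def neg2_log_lambda(groups):
    """-2 log likelihood ratio for testing that k normal populations
    share a common mean and variance, vs. unrestricted means/variances:
    N*log(s0) - sum(n_i * log(s_i)), with MLE (divide-by-n) variances."""
    all_x = [x for g in groups for x in g]
    N = len(all_x)
    grand = sum(all_x) / N
    s0 = sum((x - grand) ** 2 for x in all_x) / N  # pooled MLE variance under H0
    stat = N * math.log(s0)
    for g in groups:
        n = len(g)
        m = sum(g) / n
        si = sum((x - m) ** 2 for x in g) / n      # group MLE variance
        stat -= n * math.log(si)
    return stat  # asymptotically chi-square with 2(k-1) df under H0

groups = [[4.1, 5.2, 3.8, 4.9], [6.0, 7.1, 5.5, 6.4], [4.5, 5.0, 4.2, 5.3]]
print(neg2_log_lambda(groups) >= 0.0)  # → True (Lambda <= 1 always)
```

Because H0 is nested in the unrestricted model, Λ ≤ 1 and the statistic is non-negative; it is exactly zero when every group has the same mean and the same MLE variance.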
53.
This paper concerns classical statistical estimation of the reliability function for the exponential density with unknown mean failure time θ and a known, fixed mission time τ. The minimum variance unbiased (MVU) and maximum likelihood (ML) estimators are reviewed and their mean square errors compared for different sample sizes. These comparisons extend previous work and further reinforce the nonexistence of a uniformly best estimator. A class of shrunken estimators is then defined, producing a shrunken quasi-estimator and a shrunken estimator. Their mean square errors are compared with those of the MVU and ML estimators, and the new estimators are found to perform very well; unfortunately, they are difficult to compute in practical applications. A second, easily computed class of estimators is also developed; it outperforms all contending estimators over the high- and low-reliability regions of the parameter space. Since analytical mean square error comparisons are not tractable for any of the estimators, extensive numerical analyses are carried out to obtain both exact small-sample and large-sample results.
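For exponential lifetimes with mean θ, the reliability at mission time τ is R(τ) = exp(−τ/θ). A Monte Carlo sketch of the MSE comparison the abstract describes, using the ML plug-in estimator and one standard form of the MVU estimator, (1 − τ/T)^(n−1) for τ < T with T the total time on test (the shrunken classes from the paper are not reproduced; parameter values are hypothetical):

```python
import math, random

def ml_est(x, tau):
    """ML estimator: plug the sample mean into exp(-tau/theta)."""
    return math.exp(-tau / (sum(x) / len(x)))

def mvu_est(x, tau):
    """A standard MVU estimator of exponential reliability:
    (1 - tau/T)^(n-1) when tau < T, else 0, with T = sum of lifetimes."""
    T, n = sum(x), len(x)
    return (1 - tau / T) ** (n - 1) if tau < T else 0.0

random.seed(1)
theta, tau, n, reps = 10.0, 5.0, 5, 20000
true_R = math.exp(-tau / theta)
mse_ml = mse_mvu = 0.0
for _ in range(reps):
    x = [random.expovariate(1 / theta) for _ in range(n)]
    mse_ml += (ml_est(x, tau) - true_R) ** 2
    mse_mvu += (mvu_est(x, tau) - true_R) ** 2
print(mse_ml / reps, mse_mvu / reps)  # neither dominates uniformly in theta
```

Re-running with different θ/τ ratios shows the ranking flip, which is the "no uniformly best estimator" point the paper reinforces.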
54.
55.
56.
Longitudinal investigations play an increasingly prominent role in biomedical research. Much of the literature on specifying and fitting linear models for serial measurements uses methods based on the standard multivariate linear model. This article proposes a more flexible approach that permits specification of the expected response as an arbitrary linear function of fixed and time-varying covariates so that mean-value functions can be derived from subject matter considerations rather than methodological constraints. Three families of models for the covariance function are discussed: multivariate, autoregressive, and random effects. Illustrations demonstrate the flexibility and utility of the proposed approach to longitudinal analysis.
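Of the three covariance families mentioned, the autoregressive one has the simplest closed form: under a first-order autoregressive (AR(1)) structure, the covariance between measurements at occasions s and t decays geometrically with their separation. A minimal sketch (function name and parameter values are hypothetical):

```python
def ar1_cov(n_times, sigma2, rho):
    """AR(1) covariance matrix for n_times equally spaced occasions:
    Cov(y_s, y_t) = sigma2 * rho**|s - t|."""
    return [[sigma2 * rho ** abs(s - t) for t in range(n_times)]
            for s in range(n_times)]

C = ar1_cov(4, 2.0, 0.5)
print(C[0])  # → [2.0, 1.0, 0.5, 0.25]
```

The unstructured multivariate family estimates every entry of this matrix freely, while the random-effects family builds it from between-subject variance components; AR(1) sits between them in parsimony.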
57.
This article concerns the construction of simple numerical illustrations of statistical techniques for use in introductory classes. Minimizing the amount of calculation facilitates checking, promotes reliability, is quicker, and reinforces the confidence of the student. Methods are described for generating (a) samples of size 3 upwards with integer-valued means and standard deviations, and (b) simple linear regressions with integer-valued intercepts and integer-valued or simple fractional slopes. Extensions to give exact pooled standard deviations in the two-sample problem and simple exact fractional correlation coefficients are also indicated. Further statistical procedures amenable to the same general approach are listed.
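The size-3 case of construction (a) has a particularly clean form: the sample {a − d, a, a + d} has sample mean exactly a and sample standard deviation exactly d, since the deviations are −d, 0, d and the (n − 1)-divisor variance is (d² + 0 + d²)/2 = d². A quick check (this is one obvious construction consistent with the abstract, not necessarily the paper's own recipe):

```python
import statistics

def sample_n3(mean, sd):
    """Size-3 teaching sample {mean-sd, mean, mean+sd}: its sample mean
    is exactly `mean` and its sample standard deviation exactly `sd`."""
    return [mean - sd, mean, mean + sd]

s = sample_n3(10, 4)
print(float(statistics.mean(s)), statistics.stdev(s))  # → 10.0 4.0
```

Shifting and scaling such a sample by integers preserves the integer-valued mean and standard deviation, which is what makes hand checking in class painless.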
58.
The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result with those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models, and shared parameter models) and with MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that, after dropout, the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was −2.79 (p = .013). In placebo multiple imputation, the result was −2.17. Results from the other sensitivity analyses ranged from −2.21 to −3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework, using a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions, supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.
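The "80% of the magnitude" figure quoted above is the ratio of the worst-reasonable-case contrast to the primary contrast; the raw ratio works out to about 0.78, which the abstract rounds to 80%:

```python
# Treatment contrasts reported in the abstract (negative = improvement).
primary, worst_case = -2.79, -2.17

# Fraction of the primary effect retained in the worst reasonable case.
fraction = worst_case / primary
print(round(fraction, 2))  # → 0.78
```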
59.
60.
In any crisis there is a great deal of uncertainty, often geographical uncertainty or, more precisely, spatiotemporal uncertainty. Examples include the spread of contamination from an industrial accident, drifting volcanic ash, and the path of a hurricane. Estimating spatiotemporal probabilities is usually a difficult task, but that is not our primary concern. Rather, we ask how analysts can communicate spatiotemporal uncertainty to those handling the crisis. We comment on the somewhat limited literature on the representation of spatial uncertainty on maps. We note that many cognitive issues arise and that the potential for confusion is high. We also note that in the early stages of handling a crisis, the uncertainties involved may be deep, i.e., difficult or impossible to quantify in the time available. In such circumstances, we suggest presenting multiple scenarios.

Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)