71.
We propose a unified approach to the estimation of regression parameters under double-sampling designs, in which a primary sample consisting of data on the rough or proxy measures for the response and/or explanatory variables as well as a validation subsample consisting of data on the exact measurements are available. We assume that the validation sample is a simple random subsample from the primary sample. Our proposal utilizes a specific parametric model to extract the partial information contained in the primary sample. The resulting estimator is consistent even if such a model is misspecified, and it achieves higher asymptotic efficiency than the estimator based only on the validation data. Specific cases are discussed to illustrate the application of the estimator proposed.
72.
Bayesian Monte Carlo (BMC) decision analysis adopts a sampling procedure to estimate likelihoods and distributions of outcomes, and then uses that information to calculate the expected performance of alternative strategies, the value of information, and the value of including uncertainty. These decision analysis outputs are therefore subject to sample error. The standard error of each estimate and its bias, if any, can be estimated by the bootstrap procedure. The bootstrap operates by resampling (with replacement) from the original BMC sample, and redoing the decision analysis. Repeating this procedure yields a distribution of decision analysis outputs. The bootstrap approach to estimating the effect of sample error upon BMC analysis is illustrated with a simple value-of-information calculation along with an analysis of a proposed control structure for Lake Erie. The examples show that the outputs of BMC decision analysis can have high levels of sample error and bias.
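The resampling loop described above is easy to sketch. The toy decision problem below (two strategies: act, with an uncertain payoff, or do nothing) and all names are illustrative assumptions, not the paper's Lake Erie example; only the bootstrap mechanics follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy BMC sample: Monte Carlo draws of an uncertain net payoff.
outcomes = rng.normal(loc=10.0, scale=4.0, size=2_000)

def decision_output(sample):
    """Illustrative decision-analysis output: the expected value of
    choosing the better of two strategies (act, with payoff equal to
    the draw, or do nothing, with payoff 0)."""
    return np.mean(np.maximum(sample, 0.0))

point_estimate = decision_output(outcomes)

# Bootstrap: resample the BMC draws with replacement, redo the decision
# analysis, and repeat to get a distribution of the output.
B = 1_000
boot = np.array([
    decision_output(rng.choice(outcomes, size=outcomes.size, replace=True))
    for _ in range(B)
])

std_error = boot.std(ddof=1)   # bootstrap standard error of the output
bias = boot.mean() - point_estimate
print(point_estimate, std_error, bias)
```

The same loop applies to any BMC output (expected value of information, value of including uncertainty): only `decision_output` changes.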
73.
The Kullback-Leibler information has been considered for establishing goodness-of-fit test statistics, which have been shown to perform very well (Arizono &amp; Ohta, 1989; Ebrahimi et al., 1992, among others). In this paper, we propose censored Kullback-Leibler information to generalize the discussion of the Kullback-Leibler information to the censored case. We then establish a goodness-of-fit test statistic based on the censored Kullback-Leibler information with type II censored data, and compare it with some existing test statistics for the exponential and normal distributions.
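A complete-sample version of this idea can be sketched with Vasicek's spacing estimator of entropy; the censored extension is the paper's contribution and is not reproduced here. The statistic below estimates the Kullback-Leibler information between the sample and a fitted exponential, so it should be near zero when the data really are exponential; the sample sizes and the window parameter m are arbitrary illustrative choices.

```python
import numpy as np

def vasicek_entropy(x, m):
    """Vasicek spacing estimator of differential entropy, with the
    usual clamping of order-statistic indices at the boundaries."""
    x = np.sort(np.asarray(x))
    n = x.size
    upper = x[np.minimum(np.arange(n) + m, n - 1)]
    lower = x[np.maximum(np.arange(n) - m, 0)]
    return np.mean(np.log(n / (2 * m) * (upper - lower)))

def kl_exp_statistic(x, m):
    """Estimated Kullback-Leibler information between the sample and the
    fitted exponential(mean = x_bar); near 0 under exponentiality."""
    x = np.asarray(x)
    return -vasicek_entropy(x, m) + np.log(x.mean()) + 1.0

rng = np.random.default_rng(0)
exp_data = rng.exponential(scale=2.0, size=200)     # H0 holds
norm_data = np.abs(rng.normal(5.0, 1.0, size=200))  # H0 fails

stat_h0 = kl_exp_statistic(exp_data, m=10)
stat_h1 = kl_exp_statistic(norm_data, m=10)
print(stat_h0, stat_h1)
```

Large values of the statistic reject exponentiality; critical values are obtained from the null distribution (by simulation or tables).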
74.
The problem of estimating the sample size for a phase III trial on the basis of existing phase II data is considered, where data from phase II cannot be combined with those of the new phase III trial. Focus is on the test for comparing the means of two independent samples. A launching criterion is adopted in order to evaluate the relevance of phase II results: phase III is run if the effect size estimate is higher than a threshold of clinical importance. The variability in sample size estimation is taken into consideration. Frequentist conservative strategies with a fixed amount of conservativeness are then compared with Bayesian strategies. A new conservative strategy, the calibrated optimal strategy (COS), is introduced, based on calibrating the optimal amount of conservativeness. To evaluate the results we compute the overall power (OP) of the different strategies, as well as the mean and the MSE of the sample size estimators. Bayesian strategies have poor characteristics, since they show a very high mean and/or MSE of the sample size estimators. COS clearly performs better than the other conservative strategies: its OP is, on average, the closest to the desired level, and it is also the highest. The COS sample size is also the closest to the ideal phase III sample size MI, showing averages and MSEs lower than those of the other strategies. Costs and experimental times are therefore considerably reduced and standardized. However, if the ideal sample size MI is to be estimated, the phase II sample size n should be around two-thirds of the ideal phase III sample size, i.e. n ≈ 2MI/3. Copyright © 2010 John Wiley &amp; Sons, Ltd.
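A minimal sketch of a fixed-conservativeness strategy (not the calibrated COS itself) may clarify the setup: the phase II effect estimate is shrunk to a lower quantile bound before being plugged into the usual two-sample normal sample size formula. The numbers, the gamma level, and the standard-error approximation are illustrative assumptions.

```python
from math import ceil, sqrt
from statistics import NormalDist

z = NormalDist().inv_cdf

def phase3_n_per_arm(delta, alpha=0.05, power=0.90):
    """Usual two-sample normal-approximation sample size per arm."""
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / delta ** 2)

def shrunken_effect(d_hat, n2_per_arm, gamma=0.8):
    """Fixed-conservativeness strategy: replace the phase II estimate by
    its lower gamma-quantile bound (COS would calibrate gamma instead)."""
    se = sqrt(2 / n2_per_arm)  # approx. SE of a standardized effect estimate
    return d_hat - z(gamma) * se

d_hat, n2, threshold = 0.5, 40, 0.2   # illustrative phase II results
if d_hat > threshold:                 # launching criterion: run phase III
    n_plug = phase3_n_per_arm(d_hat)                    # plain plug-in
    n_cons = phase3_n_per_arm(shrunken_effect(d_hat, n2))
    print(n_plug, n_cons)
```

The conservative sample size is larger than the plug-in one, which is the mechanism by which overall power is pushed back toward the nominal level when the phase II estimate is noisy.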
75.
The performance of Anderson's classification statistic based on a post-stratified random sample is examined. It is assumed that the training sample is a random sample from a stratified population consisting of two strata with unknown stratum weights. The sample is first segregated into the two strata by post-stratification. The unknown parameters for each of the two populations are then estimated and used in the construction of the plug-in discriminant. Under this procedure, it is shown that additional estimation of the stratum weight will not seriously affect the performance of Anderson's classification statistic. Furthermore, our discriminant enjoys a much higher efficiency than the procedure based on an unclassified sample from a mixture of normals investigated by Ganesalingam and McLachlan (1978).
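The plug-in construction can be sketched as follows; the simulated data, dimensions, and cutoff convention are illustrative assumptions, with the stratum weight estimated from the post-stratified counts.

```python
import numpy as np

rng = np.random.default_rng(1)

# Training data from a two-stratum population, post-stratified into the
# two groups; the stratum weight is unknown and estimated from counts.
n = 300
weight_true = 0.6
labels = rng.random(n) < weight_true
X = np.where(labels[:, None],
             rng.normal(0.0, 1.0, (n, 2)),   # population 1
             rng.normal(2.0, 1.0, (n, 2)))   # population 2

x1, x2 = X[labels], X[~labels]
p_hat = labels.mean()                        # estimated stratum weight
mu1, mu2 = x1.mean(axis=0), x2.mean(axis=0)
S = ((len(x1) - 1) * np.cov(x1.T) + (len(x2) - 1) * np.cov(x2.T)) / (n - 2)
S_inv = np.linalg.inv(S)

def anderson_W(x):
    """Plug-in version of Anderson's classification statistic W."""
    return (x - (mu1 + mu2) / 2) @ S_inv @ (mu1 - mu2)

# Classify to population 1 when W(x) exceeds the log prior odds.
cutoff = np.log((1 - p_hat) / p_hat)
x_new = np.array([0.2, -0.1])
print("population 1" if anderson_W(x_new) > cutoff else "population 2")
```

Replacing `p_hat` by a known weight recovers the fully classified case; the abstract's point is that using the estimated `p_hat` costs little.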
76.
A unified approach is developed for testing hypotheses in the general linear model based on the ranks of the residuals. It complements the nonparametric estimation procedures recently reported in the literature. The testing and estimation procedures together provide a robust alternative to least squares. The methods are similar in spirit to least squares so that results are simple to interpret. Hypotheses concerning a subset of specified parameters can be tested, while the remaining parameters are treated as nuisance parameters. Asymptotically, the test statistic is shown to have a chi-square distribution under the null hypothesis. This result is then extended to cover a sequence of contiguous alternatives from which the Pitman efficacy is derived. The general application of the test requires the consistent estimation of a functional of the underlying distribution and one such estimate is furnished.
77.
In this paper, we have studied some implications between tail-ordering (also known as dispersive ordering) and failure rate ordering (also called TP2 ordering) of two probability distribution functions. Based on independent random samples from these distributions, a class of distribution-free tests has been proposed for testing the null hypothesis that the two life distributions are identical against the alternative that one failure rate is uniformly smaller than the other. The tests have good efficiencies as compared to their competitors.
78.
We study the effects of including pairs of correlated observations in a sample on likelihood ratio tests for the difference in two means. In particular, we assess how the inclusion of correlated data pairs (e.g., data inadvertently collected from sib-pairs) affects the sample size requirements for a likelihood ratio (LR) test for the difference between two means. Our results suggest that correlated data pairs affect sample size requirements for an LR test, beneficially or adversely, to a degree that depends on the mixture parameters dictating their relative frequency in the larger sample on which the test is performed, the strength of the correlation between the paired observations, and the size of imbalances in the number of observations in each group. The relevance and implications of the results for genetic and epidemiologic research are discussed.
79.
Continuous sampling surveys based on time series analysis
To improve the precision of current-period estimation in a continuous (repeated) sampling survey by exploiting the survey information from past periods, time series analysis methods are introduced. Considering separately the cases of repeated samples and overlapping samples in continuous surveys, time series models are built for each case, and linear-combination estimators of the population characteristics are derived using well-established time series techniques. Because time series analysis can make full use of the survey information from all previous periods, the resulting estimators attain higher precision.
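A simple special case of this idea can be sketched with an exponential-smoothing composite estimator for a repeated survey; the drifting path of the true mean, the smoothing weight, and all sizes are illustrative assumptions, not the models of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated repeated survey: the true population mean drifts over time,
# and each period yields a direct estimate with sampling error (SE = 1).
T = 200
true_mean = 50 + np.cumsum(rng.normal(0.0, 0.3, T))
direct = true_mean + rng.normal(0.0, 1.0, T)

# Composite estimator: a linear combination of the current direct
# estimate and the previous composite, so information from all past
# periods is carried forward with geometrically decaying weights.
w = 0.6
composite = np.empty(T)
composite[0] = direct[0]
for t in range(1, T):
    composite[t] = w * direct[t] + (1 - w) * composite[t - 1]

mse_direct = np.mean((direct - true_mean) ** 2)
mse_composite = np.mean((composite - true_mean) ** 2)
print(mse_direct, mse_composite)
```

When the mean evolves smoothly, the composite estimator trades a small lag bias for a large variance reduction, which is the same precision gain the abstract attributes to time-series-based estimators.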
80.
Two recursive schemes are presented for the calculation of the probability P(g(x) ≤ S_n(x) ≤ h(x) for all x ∈ ℝ), where S_n is the empirical distribution function of a sample from a continuous distribution and g, h are continuous and isotone functions. The results are specialized to the calculation of the distribution, and the corresponding percentage points, of the test statistic of the two-sided one-sample Kolmogorov-Smirnov test. The schemes also allow the calculation of the power of the test. Finally, an extensive tabulation of percentage points for the Kolmogorov-Smirnov test is given.
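For the Kolmogorov-Smirnov specialization, exact percentage points can be checked against SciPy's implementation of the exact distribution of the two-sided statistic D_n, a modern stand-in for the recursive schemes and the tabulation described above:

```python
import numpy as np
from scipy import stats

# Exact two-sided one-sample KS critical values d with P(D_n > d) = 0.05,
# from SciPy's exact distribution of D_n.
for n in (10, 20, 50):
    print(n, stats.kstwo.ppf(0.95, n))

# Spot check: under H0 the statistic exceeds the critical value about
# 5% of the time.
rng = np.random.default_rng(7)
n = 20
d_crit = stats.kstwo.ppf(0.95, n)
trials = 2_000
rejections = sum(
    stats.kstest(rng.uniform(size=n), "uniform").statistic > d_crit
    for _ in range(trials)
)
level = rejections / trials
print(level)
```

The same distribution function also gives the power against a specified alternative, by simulating the statistic under that alternative and comparing it with `d_crit`.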