335 results found (search time: 250 ms)
11.
Abstract. Several testing procedures are proposed that can detect change-points in the error distribution of non-parametric regression models. Different settings are considered, in which the change-point occurs either at some point in time or at some value of the covariate; both fixed and random covariates are treated. Under the null hypothesis of no change-point, weak convergence of the suggested difference of sequential empirical processes, based on non-parametrically estimated residuals, to a Gaussian process is proved. When testing for a change in the error distribution occurring over time in a model with random covariates, the test statistic is asymptotically distribution-free and the asymptotic quantiles can be used directly; this special statistic can also detect a change in the regression function. In all other cases the asymptotic distribution depends on unknown features of the data-generating process, and a bootstrap procedure is proposed. The small-sample performance of the proposed tests is investigated in a simulation study, and the tests are applied to a data example.
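The sequential-empirical-process idea can be caricatured in a few lines: scan all split points, compare the empirical CDFs of the residuals before and after the split, and weight the discrepancy CUSUM-style. This is only a hedged illustration (the name `seq_change_stat` and the brute-force scan are illustrative, not the authors' procedure); the paper's statistics are built on non-parametrically estimated residuals with Gaussian-process limits, which this sketch does not reproduce.

```python
def seq_change_stat(residuals):
    """Max over split points k of a weighted KS distance between the
    empirical CDFs of residuals[:k] and residuals[k:]."""
    n = len(residuals)
    grid = sorted(residuals)
    best = 0.0
    for k in range(1, n):
        left, right = residuals[:k], residuals[k:]
        d = max(abs(sum(x <= g for x in left) / k
                    - sum(x <= g for x in right) / (n - k)) for g in grid)
        # weight splits near the middle more heavily (CUSUM-type scaling)
        best = max(best, (k * (n - k) / n ** 1.5) * d)
    return best
```

A residual sequence whose distribution shifts mid-sample yields a large value of the statistic; under no change the statistic stays small.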
12.
In a missing-data setting, we want to estimate the mean of a scalar outcome based on a sample in which an explanatory variable is observed for every subject, while responses are missing by happenstance for some of them. We consider two kinds of estimates of the mean response when the explanatory variable is functional: one based on the average of the predicted values, and a functional adaptation of the Horvitz–Thompson estimator. We show that the infinite dimensionality of the problem does not affect the rates of convergence, proving that the estimates are root-n consistent under the missing at random (MAR) assumption. These asymptotic results are complemented by simulated experiments illustrating the ease of implementation and the good finite-sample behaviour of the method. This is the first paper emphasizing that the insensitivity of averaged estimates, well known in multivariate non-parametric statistics, remains true for an infinite-dimensional covariate. In this sense, this work opens the way for various other results of this kind in functional data analysis.
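The two estimators have a simple generic form, sketched below with the functional covariate abstracted away (only the shape of the estimators matters here; names such as `ht_mean` are illustrative, not from the paper):

```python
def ht_mean(y, observed, propensity):
    """Horvitz-Thompson-type estimate: observed responses,
    inverse-weighted by the (estimated) probability of being observed,
    averaged over the full sample size."""
    n = len(y)
    return sum(yi / p for yi, obs, p in zip(y, observed, propensity) if obs) / n

def regression_mean(predicted):
    """Average of the predicted values m_hat(X_i) over all subjects,
    including those with a missing response."""
    return sum(predicted) / len(predicted)
```

With every response observed and unit propensities, `ht_mean` reduces to the ordinary sample mean, which is a quick sanity check on the weighting.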
13.
Abstract. Family-based case–control designs are commonly used in epidemiological studies to evaluate the role of genetic susceptibility and environmental exposure to risk factors in the etiology of rare diseases. Within this framework, it is often reasonable to assume that genetic susceptibility and environmental exposure are conditionally independent of each other within families in the source population. We focus on this setting to explore the situation in which measurement error affects the assessment of the environmental exposure. We correct for measurement error through a likelihood-based method, exploiting a conditional likelihood approach to relate the probability of disease to the genetic and environmental risk factors. We show that this approach provides less biased and more efficient results than one based on logistic regression. Regression calibration, in contrast, provides severely biased estimators of the parameters. The correction methods are compared through simulation, under common measurement error structures.
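Regression calibration, the competitor reported above as severely biased in this design, is simple enough to state in code. A hedged sketch of the classical correction under an additive error model W = X + U with known error variance (function name and interface are illustrative, not the authors' implementation):

```python
def regression_calibration(w, sigma_u2):
    """Replace the error-prone measurement W = X + U by the best linear
    predictor of X given W: E[X|W] = mu + lam * (W - mu), with
    lam = (var(W) - sigma_u2) / var(W). Shown only to fix ideas; the
    abstract notes this correction can be badly biased in the
    family-based setting."""
    n = len(w)
    mu = sum(w) / n
    var_w = sum((wi - mu) ** 2 for wi in w) / n
    lam = max(var_w - sigma_u2, 0.0) / var_w
    return [mu + lam * (wi - mu) for wi in w]
```

With zero measurement-error variance the data are returned unchanged; as the error variance approaches var(W), all values shrink to the mean.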
14.
It is well known that the nonparametric maximum likelihood estimator (NPMLE) of a survival function may severely underestimate the survival probabilities at very early times for left-truncated data. This problem can be overcome by instead computing a smoothed nonparametric estimator (SNE) via the EMS algorithm. The close connection between the SNE and the maximum penalized likelihood estimator is also established. Extensive Monte Carlo simulations demonstrate the superior performance of the SNE over the NPMLE, in terms of either bias or variance, even for moderately large samples. The methodology is illustrated with an application to the Massachusetts Health Care Panel Study dataset, estimating the probability of being functionally independent for non-poor male and female groups respectively.
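The NPMLE in question is the product-limit (Lynden-Bell) estimator for left-truncated data. A minimal sketch without censoring follows; the EMS smoothing step is not reproduced, and all names are illustrative:

```python
def lynden_bell(truncation, event):
    """Product-limit NPMLE of the survival function S(t) under left
    truncation: at each distinct event time y, multiply by
    (1 - d(y)/R(y)), where d(y) counts events at y and the risk set is
    R(y) = #{i : truncation_i <= y <= event_i}."""
    surv, s = {}, 1.0
    for y in sorted(set(event)):
        d = sum(e == y for e in event)
        r = sum(t <= y <= e for t, e in zip(truncation, event))
        s *= 1 - d / r
        surv[y] = s
    return surv
```

With no truncation (all truncation times below every event time) the estimator reduces to the ordinary empirical survival function, which makes the early-time underestimation under heavy truncation easy to explore numerically.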
15.
In this paper, an alternative method for comparing two diagnostic systems based on receiver operating characteristic (ROC) curves is presented. ROC curve analysis is often used as a statistical tool for the evaluation of diagnostic systems. In general, however, the comparison of ROC curves is not straightforward, in particular when they cross each other. A similar difficulty arises in the multi-objective optimization field, where sets of solutions defining fronts must be compared in a multi-dimensional space. The proposed methodology is therefore based on a procedure used to compare the performance of distinct multi-objective optimization algorithms. Methods based on the area under the ROC curve are generally not sensitive to crossing points between the curves; the new approach can handle this situation and also allows the comparison of partial portions of ROC curves, restricted to ranges of sensitivity and specificity of practical interest. Simulation results are presented, and for illustration the new method is applied to real data from newborns with very low birthweight in order to identify the better index for evaluating the risk of death.
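For orientation, the standard empirical ROC curve and its trapezoidal AUC can be computed as below. As noted above, such area summaries are insensitive to crossing curves, which is exactly the gap the front-comparison method addresses; this is a generic hedged sketch, not the authors' procedure:

```python
def roc_points(neg_scores, pos_scores):
    """Empirical ROC: (FPR, TPR) pairs swept over all score thresholds."""
    thresholds = sorted(set(neg_scores) | set(pos_scores), reverse=True)
    pts = [(0.0, 0.0)]
    for th in thresholds:
        fpr = sum(s >= th for s in neg_scores) / len(neg_scores)
        tpr = sum(s >= th for s in pos_scores) / len(pos_scores)
        pts.append((fpr, tpr))
    return pts

def auc(pts):
    """Trapezoidal area under the empirical ROC curve."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
```

Perfect separation of the two score samples gives AUC 1, and identical score distributions give AUC 0.5; two crossing curves, however, can share the same AUC, which is why an area summary alone cannot rank them.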
16.
Summary: In this study, a concept for cumulating periodic household budget surveys within the project Official Statistics and Socio-Economic Questions is developed and put up for discussion. We lay out the theoretical background and solve the central task of structural demographic weighting with a calibration approach on an information-theoretic basis. Building on the household budget surveys of the Federal Statistical Office (the periodic household budget surveys and the Income and Consumption Sample, EVS), a concrete concept for cumulating yearly household budget surveys is proposed. This achieves the goal of cumulating cross-sections into a comprehensive cumulated sample for finely disaggregated analyses. Simulation studies to evaluate the concept are to follow.
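The information-theoretic calibration described above is closely related to classical raking (iterative proportional fitting), which adjusts weights by minimum discrimination information subject to known margin constraints. A toy stand-in, not the project's actual estimator; the interface and names are assumptions:

```python
def rake(weights, categories, margins, iters=50):
    """Iterative proportional fitting: rescale unit weights so weighted
    totals match known margins in each demographic dimension.
    categories: one dict per dimension mapping unit index -> level.
    margins:    one dict per dimension mapping level -> target total."""
    w = list(weights)
    for _ in range(iters):
        for cat, marg in zip(categories, margins):
            totals = {}
            for i, wi in enumerate(w):
                totals[cat[i]] = totals.get(cat[i], 0.0) + wi
            for i in range(len(w)):
                w[i] *= marg[cat[i]] / totals[cat[i]]
    return w
```

With a single dimension the procedure converges in one pass; with several dimensions it cycles until all margins are matched simultaneously.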
17.
We propose kernel density estimators based on prebinned data, using generalized binning schemes whose grid points are the quantile points of a certain auxiliary distribution; the uniform distribution corresponds to the usual equally spaced binning. The statistical accuracy of the resulting kernel estimators is studied: we derive mean squared error results for the closeness of these estimators both to the true density and to the kernel estimator based on the original data set. Our results show the influence of the choice of the auxiliary density on the binned kernel estimators and reveal that non-uniform binning can be worthwhile.
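The construction can be sketched in a few lines: place grid points at quantiles of the auxiliary distribution, bin the data to the nearest grid point, and evaluate the kernel estimate from the bin counts. A hedged illustration with a Gaussian kernel (names and the nearest-point binning rule are illustrative simplifications):

```python
import math

def binned_kde(data, bandwidth, num_bins, quantile_fn):
    """Binned kernel density estimate on a quantile grid.
    quantile_fn maps (0,1) to the real line; the identity map, i.e. the
    uniform distribution on (0,1), reproduces equally spaced binning."""
    grid = [quantile_fn((j + 0.5) / num_bins) for j in range(num_bins)]
    counts = [0] * num_bins
    for x in data:
        j = min(range(num_bins), key=lambda jj: abs(grid[jj] - x))
        counts[j] += 1
    n, h = len(data), bandwidth

    def estimate(t):
        gauss = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
        return sum(c * gauss((t - g) / h) for c, g in zip(counts, grid)) / (n * h)

    return estimate
```

Passing a non-uniform `quantile_fn` concentrates grid points where the auxiliary density is high, which is the non-uniform binning whose benefit the paper quantifies.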
18.
Non-parametric Estimation of the Residual Distribution (total citations: 2; self-citations: 0; citations by others: 2)
Consider a heteroscedastic regression model Y = m(X) + σ(X)ε, where the functions m and σ are "smooth" and ε is independent of X. An estimator of the distribution of ε based on non-parametric regression residuals is proposed and its weak convergence is obtained. Applications to prediction intervals and goodness-of-fit tests are discussed.
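The estimator's construction is concrete: estimate m and σ non-parametrically, standardize the residuals, and take their empirical distribution function. A hedged sketch using a simple box-kernel local average for both m and σ² (bandwidth must be large enough that every local window is non-empty; names are illustrative):

```python
def residual_ecdf(x, y, bandwidth):
    """Empirical CDF of standardized non-parametric residuals
    eps_i = (Y_i - m_hat(X_i)) / sigma_hat(X_i)."""
    def local_mean(t, values):
        w = [1.0 if abs(xi - t) <= bandwidth else 0.0 for xi in x]
        s = sum(w)
        return sum(wi * v for wi, v in zip(w, values)) / s
    m_hat = [local_mean(xi, y) for xi in x]
    var_hat = [max(local_mean(xi, [yj * yj for yj in y]) - mh * mh, 1e-12)
               for xi, mh in zip(x, m_hat)]
    eps = [(yi - mh) / vh ** 0.5 for yi, mh, vh in zip(y, m_hat, var_hat)]
    return lambda t: sum(e <= t for e in eps) / len(eps)
```

The returned function is the non-parametric estimate of the distribution of ε whose weak convergence the paper establishes; plugging it into prediction intervals or goodness-of-fit statistics is then direct.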
19.
The problem of estimating the sample size for a phase III trial on the basis of existing phase II data is considered, where the phase II data cannot be combined with those of the new phase III trial. Focus is on the test for comparing the means of two independent samples. A launching criterion is adopted to evaluate the relevance of the phase II results: phase III is run only if the effect-size estimate exceeds a threshold of clinical importance. The variability of the sample size estimate is taken into account, and frequentist conservative strategies with a fixed amount of conservativeness are compared with Bayesian strategies. A new conservative strategy is introduced, based on calibrating the optimal amount of conservativeness: the calibrated optimal strategy (COS). To evaluate the results we compute the overall power (OP) of the different strategies, as well as the mean and MSE of the sample size estimators. Bayesian strategies perform poorly, showing a very high mean and/or MSE of the sample size estimators. COS clearly outperforms the other conservative strategies: its OP is, on average, both closest to the desired level and highest, and its sample size is closest to the ideal phase III sample size MI, with lower averages and MSEs than those of the other strategies. Costs and experimental times are therefore considerably reduced and standardized. However, if the ideal sample size MI is to be estimated well, the phase II sample size n should be around two-thirds of the ideal phase III sample size, i.e. n ≈ 2MI/3. Copyright © 2010 John Wiley & Sons, Ltd.
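The underlying per-arm sample size formula for the two-sample comparison is standard, and a fixed amount of conservativeness can be expressed as shrinking the phase II effect-size estimate before plugging it in. A hedged sketch of that plug-in step only (the `shrink` parameter is an illustrative stand-in; the paper's COS calibrates the conservativeness rather than fixing it):

```python
import math

def norm_quantile(p):
    """Standard normal quantile via bisection on the erf-based CDF."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def phase3_n_per_arm(d_hat, alpha=0.05, power=0.8, shrink=1.0):
    """Per-arm n for a two-sample z-test: 2 (z_{1-a/2} + z_{power})^2 / d^2,
    with the phase II estimate d_hat optionally shrunk (shrink < 1) as a
    fixed-conservativeness strategy."""
    za = norm_quantile(1 - alpha / 2)
    zb = norm_quantile(power)
    d = shrink * d_hat
    return math.ceil(2 * (za + zb) ** 2 / d ** 2)
```

For a standardized effect of 0.5 at 5% two-sided level and 80% power this gives 63 per arm; shrinking the estimate inflates the planned trial, which is the price of conservativeness the paper's calibration optimizes.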
20.
Abstract. Several classical time series models can be written as a regression model between the components of a strictly stationary bivariate process. Some of these models, such as ARCH models, share the property that the regression function is proportional to the scale function, an interesting feature in econometric and financial models. In this article, we present a procedure to test for this feature in a non-parametric context. The test is based on the difference between two non-parametric estimators of the distribution of the regression error. Asymptotic results are proved, and simulations illustrate the finite-sample properties of the procedure.
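The core ingredient of such a test, the distance between two non-parametric estimators of the error distribution, can be caricatured by the sup-norm distance between two empirical CDFs. A hedged sketch (the actual procedure compares estimators built with and without the proportionality constraint and calibrates critical values by bootstrap):

```python
def ks_distance(sample_a, sample_b):
    """Sup-norm (Kolmogorov-Smirnov) distance between the empirical
    CDFs of two samples of estimated regression errors."""
    grid = sorted(set(sample_a) | set(sample_b))
    return max(abs(sum(x <= g for x in sample_a) / len(sample_a)
                   - sum(x <= g for x in sample_b) / len(sample_b))
               for g in grid)
```

Under proportionality the two sets of estimated errors share a distribution and the distance is small; a large distance is evidence against the feature.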