431.
In this paper we consider two-stage estimators of the parameters of a structural equation in a model with recursive exclusion restrictions on the instrumental-variable equations. The estimators considered are simple OLS and GLS estimators obtained after substituting estimates of the systematic part of the IV equations for the endogenous variables. It is known from the literature that, in general, neither imposing the restrictions in the first stage nor ignoring them is more efficient than the alternative. We introduce a class of mixed instrumental variables (MIV) estimators that contains both possibilities as special cases and yields an estimator which is not only more efficient than the two-stage estimators considered in the literature but as efficient as a full-system estimator such as 3SLS.
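The substitution idea behind such two-stage estimators can be illustrated with a minimal two-stage least squares sketch (this is plain 2SLS, not the paper's MIV estimator; the single-regressor setup and all simulated parameter values are illustrative): the endogenous regressor is replaced by its first-stage fit on the instrument before the second-stage regression.

```python
import random

def slope_intercept(xs, ys):
    """OLS slope and intercept of ys on xs (single regressor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    return b, my - b * mx

def two_stage_ls(y, x, z):
    """2SLS: substitute the systematic part of x (its fit on z) into stage two."""
    b1, a1 = slope_intercept(z, x)        # first stage: x on the instrument z
    x_hat = [a1 + b1 * zi for zi in z]    # estimated systematic part of x
    return slope_intercept(x_hat, y)[0]   # second stage: y on x_hat

# Simulated demonstration: x is endogenous because the error u enters both
# x and y; z is a valid instrument (correlated with x, independent of u).
rng = random.Random(0)
beta = 2.0
z = [rng.gauss(0, 1) for _ in range(5000)]
u = [rng.gauss(0, 1) for _ in range(5000)]
x = [zi + ui for zi, ui in zip(z, u)]
y = [beta * xi + ui + rng.gauss(0, 0.5) for xi, ui in zip(x, u)]

b_ols = slope_intercept(x, y)[0]   # inconsistent: picks up the u channel
b_2sls = two_stage_ls(y, x, z)     # consistent for beta
```

Here plain OLS converges to about 2.5 rather than 2 because cov(x, u) ≠ 0, while the two-stage estimator is consistent; the MIV class concerns how restrictions on the first-stage (IV) equations are imposed before this substitution.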
432.
There are a variety of methods in the literature which seek to make iterative estimation algorithms more manageable by breaking the iterations into a greater number of simpler or faster steps. Those algorithms which deal at each step with a proper subset of the parameters are called in this paper partitioned algorithms. Partitioned algorithms in effect replace the original estimation problem with a series of problems of lower dimension. The purpose of the paper is to characterize some of the circumstances under which this process of dimension reduction leads to significant benefits.

Four types of partitioned algorithms are distinguished: reduced objective function methods, nested (partial Gauss-Seidel) iterations, zigzag (full Gauss-Seidel) iterations, and leapfrog (non-simultaneous) iterations. Emphasis is given to Newton-type methods using analytic derivatives, but a nested EM algorithm is also given. Nested Newton methods are shown to be equivalent to applying the same Newton method to the reduced objective function, and are applied to separable regression and generalized linear models. Nesting is shown generally to improve the convergence of Newton-type methods, both by improving the quadratic approximation to the log-likelihood and by improving the accuracy with which the observed information matrix can be approximated. Nesting is recommended whenever a subset of parameters is relatively easily estimated. The zigzag method is shown to produce a stable but generally slow iteration; it is fast, and recommended, when the parameter subsets have approximately uncorrelated estimates. The leapfrog iteration has fewer guaranteed properties in general, but is similar to nesting and zigzagging when the parameter subsets are orthogonal.
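A hedged sketch of the zigzag (full Gauss-Seidel) idea on a toy quadratic objective (the objective and its closed-form conditional minimizers are illustrative, not from the paper): each sweep minimizes exactly over one parameter block while the other is held fixed.

```python
# Toy objective f(a, b) = a**2 + b**2 + a*b - 2*a - 4*b, whose unique
# minimum is at (a, b) = (0, 2).  The conditional minimizers are exact:
#   argmin_a f(a, b) = (2 - b) / 2,   argmin_b f(a, b) = (4 - a) / 2.

def zigzag(a=0.0, b=0.0, sweeps=40):
    """Full Gauss-Seidel: alternate exact one-block minimizations."""
    for _ in range(sweeps):
        a = (2 - b) / 2   # minimize over a with b held fixed
        b = (4 - a) / 2   # minimize over b with the fresh a held fixed
    return a, b

a_hat, b_hat = zigzag()
```

Each sweep contracts the error by a fixed factor (here 1/4), the linear rate typical of zigzag iterations; the rate degrades as the blocks become more correlated, which matches the paper's recommendation to zigzag only when the parameter subsets have approximately uncorrelated estimates.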
433.
The problem of constructing confidence limits for a scalar parameter is considered. Under weak conditions, Efron's accelerated bias-corrected (BCa) bootstrap confidence limits are correct to second order in parametric families. In this article, a new method, called the automatic percentile method, for setting approximate confidence limits is proposed as an attempt to alleviate two problems inherent in Efron's method. The accelerated bias-corrected method is not fully automatic, since it requires the calculation of an analytical adjustment; furthermore, it is typically not exact, though for many situations, particularly scalar-parameter families, exact answers are available. In broader generality, the proposed method is exact when exact answers exist, and it is second-order accurate otherwise. The automatic percentile method is automatic, and for scalar-parameter models it can be iterated to achieve higher accuracy, with the number of computations being linear in the number of iterations. However, when nuisance parameters are present, only second-order accuracy seems obtainable.
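For orientation, here is the plain percentile bootstrap (simpler than either Efron's BCa limits or the automatic percentile method proposed in the abstract, but the resampling machinery both build on); the data-generating settings are illustrative.

```python
import random
import statistics

def percentile_ci(data, stat=statistics.mean, reps=2000, alpha=0.05, seed=0):
    """Plain percentile bootstrap confidence limits for stat(data)."""
    rng = random.Random(seed)
    boot = sorted(stat(rng.choices(data, k=len(data))) for _ in range(reps))
    lo = boot[int(reps * alpha / 2)]          # lower alpha/2 quantile
    hi = boot[int(reps * (1 - alpha / 2)) - 1]  # upper 1 - alpha/2 quantile
    return lo, hi

rng = random.Random(1)
data = [rng.gauss(10, 2) for _ in range(50)]
lo, hi = percentile_ci(data)
```

BCa improves on these limits by correcting for bias and skewness via an analytically computed "acceleration" (the adjustment the abstract refers to); the automatic percentile method instead replaces that analytical step with further computation, and can be iterated for higher-order accuracy.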
434.
Summary. We consider the ideas of sufficiency and ancillarity for parametric models with nuisance parameters, and more generally Barndorff-Nielsen's notion of nonformation. The original four definitions of nonformation, namely B-, S-, G- and M-nonformation, each cover different types of models. We stress the interpretation of nonformation in terms of the idea of perfect fit. This leads to a new definition of nonformation, called I-nonformation, which is well suited for inference in exponential families. We also consider Rémon's concept of L-sufficiency, and a recent extension to L-nonformation, due to Barndorff-Nielsen, which unifies and extends B-, S- and G-nonformation. We study the relations between these six definitions, and show that they are all special cases of M-nonformation. "All animals are equal, but some animals are more equal than others." From 'Animal Farm', by G. Orwell (1945).
435.
The probability of tumor and the hazard function are calculated in a stochastic two-stage model of carcinogenesis when the parameters of the model are time-dependent. The method used is the method of characteristics.
436.
We consider the likelihood ratio test (LRT) process for the test of the absence of a QTL (a quantitative trait locus, i.e. a gene with a quantitative effect on a trait) on the interval [0, T] representing a chromosome. The novelty of this study is that we work under selective genotyping: only the individuals with extreme phenotypes are genotyped. We give the asymptotic distribution of this LRT process under the null hypothesis that there is no QTL on [0, T] and under local alternatives with a QTL at t on [0, T]. We show that the LRT process is asymptotically the square of a 'non-linear interpolated and normalized Gaussian process'. We give a simple formula for computing the supremum of the square of this interpolated process. We prove that one must genotype symmetrically, and that the threshold is exactly the same as in the situation where all individuals are genotyped.
437.
Supremum score test statistics are often used to evaluate hypotheses with nuisance parameters that are unidentifiable under the null hypothesis. Although these statistics provide an attractive framework for addressing non-identifiability under the null hypothesis, little attention has been paid to their distributional properties in small to moderate sample sizes. When identifiable nuisance parameters are also present under the null hypothesis, these statistics may behave erratically in realistic samples as a result of a non-negligible bias induced by substituting these nuisance parameters with their estimates under the null hypothesis. In this paper, we propose an adjustment to the supremum score statistic that subtracts the expected bias from the score processes, and we show that this adjustment does not alter the limiting null distribution of the supremum score statistic. Using a simple example from the class of zero-inflated regression models for count data, we show empirically and theoretically that the adjusted tests are superior in terms of size and power. The practical utility of this methodology is illustrated using count data from HIV research.
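The basic construction can be sketched generically (this is the unadjusted supremum score statistic, not the bias-corrected version proposed in the paper; the score function and data below are purely illustrative): the score statistic is standardized at each value of the unidentified nuisance parameter `gamma` on a grid, and the supremum is taken over the grid.

```python
import math

def sup_score(data, score_fn, gamma_grid):
    """Supremum over a nuisance-parameter grid of the absolute
    empirically standardized score statistic."""
    n = len(data)
    stats = []
    for g in gamma_grid:
        s = [score_fn(x, g) for x in data]
        m = sum(s) / n
        v = sum((si - m) ** 2 for si in s) / n   # empirical score variance
        stats.append(abs(sum(s)) / math.sqrt(n * v))
    return max(stats)

# Illustrative inputs only: a bounded odd score and a small fixed sample.
data = [0.3, -1.2, 2.1, 0.7, -0.4]
score = lambda x, g: math.tanh(g * x)
t_sup = sup_score(data, score, [0.5, 1.0, 2.0])
```

Enlarging the grid can only increase the statistic, which is why sup-score tests need the correct null distribution of the whole process; the paper's adjustment additionally removes the bias introduced when identifiable nuisance parameters are replaced by their null estimates.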
438.
A result is presented concerning the null distribution of a statistic used to determine the number of multiplicative components in a fixed two-way model. This result suggests critical values, which are compared with previously proposed critical values.
439.
A great deal of inference in statistics is based on the approximation that a statistic is normally distributed. The error in doing so is generally O(n^(-1/2)), where n is the sample size, and can be considerable when the distribution of the statistic is heavily biased or skewed. This note shows how one may reduce the error to O(n^(-(j+1)/2)), where j is a given integer. The case considered is that in which the statistic is the mean of the sample values of a continuous distribution with a scale or location change, after the sample has undergone an initial transformation, which may depend on an unknown parameter. The transformation corresponding to Fisher's score function yields an asymptotically efficient procedure.
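A hedged illustration of the underlying idea (a generic normalizing transformation, not the paper's score-function construction; all simulation settings are illustrative): means of log-transformed lognormal samples are far less skewed, and hence far better approximated by a normal, than means of the raw samples.

```python
import math
import random

def sample_skewness(xs):
    """Standardized third central moment (a simple skewness estimate)."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    s3 = sum((x - m) ** 3 for x in xs) / n
    return s3 / s2 ** 1.5

rng = random.Random(0)
raw_means, log_means = [], []
for _ in range(2000):
    sample = [math.exp(rng.gauss(0, 1)) for _ in range(10)]   # lognormal data
    raw_means.append(sum(sample) / 10)                        # mean of raw values
    log_means.append(sum(math.log(x) for x in sample) / 10)   # mean after transform

skew_raw = sample_skewness(raw_means)   # heavily right-skewed
skew_log = sample_skewness(log_means)   # near 0: normal approximation is good
```

The transformed means are exactly normal here because log removes the skewness entirely; the note's contribution is choosing the transformation systematically (via Fisher's score function, possibly depending on an unknown parameter) so that the error drops from O(n^(-1/2)) to O(n^(-(j+1)/2)).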
440.
Statistical meta-analysis is mostly carried out with the help of the random-effects normal model, including the case of discrete random variables. We argue that the normal approximation is not always able to adequately capture the underlying uncertainty of the original discrete data. Furthermore, when we examine the influence of the prior distributions considered, in the presence of rare events, the results from this approximation can be very poor. In order to assess the robustness of the quantities of interest in meta-analysis with respect to the choice of priors, this paper proposes an alternative Bayesian model for binomial random variables with several zero responses. Particular attention is paid to the coherence between the prior distributions of the study-model parameters and the meta-parameter. Thus, our method introduces a simple way to examine the sensitivity of these quantities to the dependence structure selected for the study. For illustrative purposes, an example with real data is analysed using the proposed Bayesian meta-analysis model for binomial sparse data. Copyright © 2016 John Wiley & Sons, Ltd.
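A minimal conjugate sketch of the prior-sensitivity issue for sparse binomial data (a common-probability beta-binomial model, far simpler than the hierarchical model proposed in the paper; the study counts and prior choices are illustrative): with several zero-event studies, the posterior mean shifts visibly with the prior.

```python
def posterior_mean(studies, a, b):
    """Posterior mean of a common event probability under a Beta(a, b) prior:
    conjugate update with pooled binomial counts (events, trials)."""
    events = sum(x for x, _ in studies)
    trials = sum(n for _, n in studies)
    return (a + events) / (a + b + trials)

# Three small studies, two with zero events (illustrative sparse data).
studies = [(0, 10), (0, 12), (1, 15)]

mean_jeffreys = posterior_mean(studies, 0.5, 0.5)   # Jeffreys Beta(1/2, 1/2)
mean_uniform = posterior_mean(studies, 1.0, 1.0)    # uniform Beta(1, 1)
```

Even these two conventional "non-informative" priors move the posterior mean by roughly 30% on this sparse data, which is the kind of rare-event sensitivity that motivates checking the coherence between study-level priors and the meta-parameter's prior.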