Full-text access: subscription 1501 articles; free 27; domestic free 5.
Subject breakdown: management science 38; demography 8; collected series 45; theory and methodology 17; general 348; sociology 11; statistics 1066.
Articles by year: 2024: 1; 2023: 4; 2022: 2; 2021: 7; 2020: 18; 2019: 39; 2018: 50; 2017: 88; 2016: 22; 2015: 25; 2014: 65; 2013: 449; 2012: 120; 2011: 61; 2010: 73; 2009: 59; 2008: 57; 2007: 61; 2006: 48; 2005: 38; 2004: 32; 2003: 28; 2002: 26; 2001: 24; 2000: 15; 1999: 15; 1998: 9; 1997: 16; 1996: 9; 1995: 6; 1994: 7; 1993: 2; 1992: 5; 1991: 5; 1990: 6; 1989: 2; 1988: 1; 1987: 4; 1985: 2; 1984: 9; 1983: 6; 1982: 5; 1981: 5; 1980: 4; 1979: 1; 1978: 2.
Sort order: 1533 query results in total; search took 15 ms.
81.
Summary. The paper proposes two Bayesian approaches to non-parametric monotone function estimation. The first approach uses a hierarchical Bayes framework and a characterization of smooth monotone functions given by Ramsay that allows unconstrained estimation. The second approach uses a Bayesian regression spline model of Smith and Kohn with a mixture of constrained normal distributions as the prior for the regression coefficients, ensuring monotonicity of the resulting function estimate. The small-sample properties of the two function estimators across a range of functions are studied via simulation and compared with existing methods. Asymptotic results are also given which show that the Bayesian methods provide consistent function estimators for a large class of smooth functions. An example involving economic demand functions illustrates the application of the constrained regression spline estimator in the context of a multiple-regression model where two functions are constrained to be monotone.
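A minimal, non-Bayesian sketch of the core idea behind the constrained-spline approach above: with a B-spline basis, nondecreasing coefficients yield a nondecreasing fitted function, so monotonicity can be enforced by constraining the coefficients. This is a least-squares stand-in, not the hierarchical Bayes or constrained-prior method of the paper; the knots, basis size, and simulated data are all illustrative assumptions.

```python
# Monotone regression spline via constrained (reparametrized) least squares.
# NOT the paper's Bayesian estimators; only illustrates the monotonicity constraint.
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.log1p(10 * x) + rng.normal(0, 0.2, x.size)   # smooth monotone truth + noise

k = 3                                                # cubic splines
interior = np.linspace(0, 1, 8)[1:-1]
t = np.r_[np.repeat(0.0, k + 1), interior, np.repeat(1.0, k + 1)]  # clamped knots
n_basis = len(t) - k - 1
B = np.column_stack([BSpline(t, np.eye(n_basis)[j], k)(x) for j in range(n_basis)])

def coefs(theta):
    # Reparametrize so the spline coefficients are nondecreasing:
    # beta_0 = theta_0, beta_j = beta_{j-1} + exp(theta_j) for j >= 1.
    return np.cumsum(np.r_[theta[0], np.exp(theta[1:])])

def sse(theta):
    return np.sum((y - B @ coefs(theta)) ** 2)

res = minimize(sse, x0=np.zeros(n_basis), method="L-BFGS-B")
fit = BSpline(t, coefs(res.x), k)                    # monotone nondecreasing estimate
print(np.all(np.diff(fit(np.linspace(0, 1, 500))) >= -1e-8))  # sanity check: True
```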
82.
Two-step estimation for inhomogeneous spatial point processes
Summary. The paper is concerned with parameter estimation for inhomogeneous spatial point processes with a regression model for the intensity function and tractable second-order properties (K-function). Regression parameters are estimated in a first step by using a Poisson likelihood score estimating function, and in a second step minimum contrast estimation is applied to the residual clustering parameters. Asymptotic normality of the parameter estimates is established under certain mixing conditions, and we exemplify how the results may be applied in ecological studies of rainforests.
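A sketch (not the authors' code) of step one of such a two-step fit: estimate the regression parameters of a log-linear intensity lambda(u; beta) = exp(beta0 + beta1 * z(u)) by maximizing the Poisson likelihood, with the window integral approximated on a grid. Step two, minimum contrast estimation of the clustering parameters from the K-function, is omitted. The covariate, window, and simulated pattern are illustrative assumptions.

```python
# Step-one Poisson likelihood fit of a log-linear intensity on the unit square.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
z = lambda u: u[..., 0]                      # toy spatial covariate (x-coordinate)

# Simulate an inhomogeneous Poisson pattern by thinning a homogeneous one.
beta_true = np.array([4.0, 1.5])
lam_max = np.exp(beta_true[0] + beta_true[1])
n_prop = rng.poisson(lam_max)                # candidate points at intensity lam_max
props = rng.uniform(0, 1, size=(n_prop, 2))
keep = rng.uniform(0, 1, n_prop) < np.exp(beta_true[0] + beta_true[1] * z(props)) / lam_max
pts = props[keep]

# Grid for approximating the integral of the intensity over the window.
g = np.arange(0.005, 1.0, 0.01)
gx, gy = np.meshgrid(g, g)
grid = np.column_stack([gx.ravel(), gy.ravel()])
cell_area = 0.01 ** 2

def neg_poisson_loglik(beta):
    log_lam_pts = beta[0] + beta[1] * z(pts)
    lam_grid = np.exp(beta[0] + beta[1] * z(grid))
    return -(log_lam_pts.sum() - cell_area * lam_grid.sum())

fit = minimize(neg_poisson_loglik, x0=np.zeros(2), method="BFGS")
print("estimated (beta0, beta1):", fit.x)    # compare with beta_true = (4.0, 1.5)
```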
83.
Huber's estimator has had a long-lasting impact, particularly on robust statistics. It is well known that, under certain conditions, Huber's estimator is asymptotically minimax. A moderate generalization of the derivation of Huber's estimator shows that it is not the only choice. We develop an alternative asymptotic minimax estimator and name it regression with stochastically bounded noise (RSBN). Simulations demonstrate that RSBN performs slightly better, although it is unclear how to justify such an improvement theoretically. We propose two numerical solutions: an iterative method based on the proximal point algorithm, which is extremely easy to implement, and a solution obtained by applying state-of-the-art nonlinear optimization software packages, e.g., SNOPT. Contribution: the generalization of the variational approach is interesting and should be useful for deriving asymptotic minimax estimators in other problems.
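A baseline sketch only: the abstract does not specify RSBN, so the code below shows the estimator it generalizes, robust linear regression with the Huber loss, solved by a simple iteratively reweighted least-squares loop. The proximal-point and SNOPT solvers the authors describe are not reproduced; the tuning constant and data are illustrative.

```python
# Huber M-estimation of a linear regression via iteratively reweighted least squares.
import numpy as np

def huber_irls(X, y, delta=1.345, n_iter=50, tol=1e-8):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # OLS start
    for _ in range(n_iter):
        r = y - X @ beta
        # Huber weights: 1 inside the threshold, delta/|r| outside.
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

rng = np.random.default_rng(2)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
y[:15] += 10                                           # a few gross outliers
print("OLS:  ", np.linalg.lstsq(X, y, rcond=None)[0])
print("Huber:", huber_irls(X, y))
```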
84.
In this paper, under a nonparametric regression model, we introduce two families of robust procedures to estimate the regression function when the response is subject to missing data. The first proposal is based on a local MM-functional applied to a conditional distribution function estimate adapted to the presence of missing data. The second proposal imputes the missing responses using a local MM-smoother based on the observed sample and then estimates the regression function from the completed sample. We show that the robust procedures considered are consistent and asymptotically normally distributed. A robust procedure for selecting the smoothing parameter is also discussed.
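A simplified stand-in for the second proposal above: impute each missing response with a robust local smoother fitted on the observed pairs, here a kernel-windowed local median rather than the local MM-smoother of the paper, and then estimate the regression function from the completed sample. The bandwidth, missingness rate, and error distribution are illustrative assumptions.

```python
# Robust imputation of missing responses followed by regression-function estimation.
import numpy as np

rng = np.random.default_rng(9)
n = 800
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.standard_t(df=2, size=n) * 0.3   # heavy-tailed errors
observed = rng.uniform(size=n) < 0.7                              # ~30% responses missing

def local_median(x0, xs, ys, h=0.08):
    # Robust local smoother: median of responses whose x lies within the window.
    mask = np.abs(xs - x0) < h
    return np.median(ys[mask]) if mask.any() else np.median(ys)

# Step 1: impute the missing responses from the observed sample.
y_complete = y.copy()
for i in np.where(~observed)[0]:
    y_complete[i] = local_median(x[i], x[observed], y[observed])

# Step 2: estimate the regression function from the completed sample on a grid.
grid = np.linspace(0.05, 0.95, 10)
m_hat = np.array([local_median(g, x, y_complete) for g in grid])
print(np.column_stack([grid, m_hat, np.sin(2 * np.pi * grid)]))   # estimate vs truth
```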
85.
Doubly adaptive biased coin design (DBCD) is an important family of response-adaptive randomization procedures for clinical trials. It uses sequentially updated estimates to skew the allocation probability in favor of the treatment that has performed better thus far. An important assumption for the DBCD is homogeneity of the patient responses. However, this assumption may be violated in many sequential experiments. Here we prove the robustness of the DBCD against certain time trends in patient responses. Strong consistency and asymptotic normality of the design are obtained under some widely satisfied conditions. We also propose a general weighted likelihood method to reduce the bias caused by heterogeneity in post-trial inference. Some numerical studies are presented to illustrate the finite-sample properties of the DBCD.
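An illustrative simulation (not the paper's code) of a doubly adaptive biased coin design for two treatments with binary responses. The allocation function g and the square-root target proportion below are common textbook choices and are assumptions of this sketch, not details taken from the abstract; responses here are homogeneous, i.e. no time trend.

```python
# DBCD simulation: skew the allocation coin toward an estimated target proportion.
import numpy as np

rng = np.random.default_rng(3)
p = {"A": 0.7, "B": 0.4}             # true success probabilities (illustrative)
gamma = 2.0                          # tuning parameter of the allocation function

def target(pa, pb):
    # Example target: allocate proportionally to the square roots of the success rates.
    return np.sqrt(pa) / (np.sqrt(pa) + np.sqrt(pb))

def g(x, rho, gamma):
    # Allocation function: pushes the coin toward rho when the current proportion x drifts away.
    num = rho * (rho / x) ** gamma
    den = num + (1 - rho) * ((1 - rho) / (1 - x)) ** gamma
    return num / den

n = 500
counts = {"A": 1, "B": 1}            # small burn-in of one patient per arm
succ = {"A": int(rng.binomial(1, p["A"])), "B": int(rng.binomial(1, p["B"]))}
for _ in range(n):
    pa_hat = (succ["A"] + 0.5) / (counts["A"] + 1)     # shrunken success estimates
    pb_hat = (succ["B"] + 0.5) / (counts["B"] + 1)
    rho = target(pa_hat, pb_hat)
    x = counts["A"] / (counts["A"] + counts["B"])
    arm = "A" if rng.uniform() < g(x, rho, gamma) else "B"
    counts[arm] += 1
    succ[arm] += int(rng.binomial(1, p[arm]))

print("allocation proportion to A:", counts["A"] / (counts["A"] + counts["B"]))
print("target proportion:         ", target(p["A"], p["B"]))
```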
86.
To enhance modeling flexibility, the authors propose a nonparametric hazard regression model, for which ordinary and weighted least squares estimation and inference procedures are studied. The proposed model does not impose any parametric specification on the covariate effects, which makes it suitable for exploring nonlinear interactions between covariates, time, and an exposure variable. The authors propose local ordinary and weighted least squares estimators for the varying-coefficient functions and establish the corresponding asymptotic normality properties. Simulation studies are conducted to empirically examine the finite-sample performance of the new methods, and a real data example from a recent breast cancer study is used as an illustration. The Canadian Journal of Statistics 37: 659–674; 2009 © 2009 Statistical Society of Canada
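A much simplified sketch of local least-squares estimation of varying-coefficient functions, shown for an ordinary regression model y = a(u) + b(u) x + noise rather than the hazard model of the paper; the kernel, bandwidth, and data are illustrative assumptions.

```python
# Local (kernel-weighted) least squares for varying coefficients a(u), b(u).
import numpy as np

rng = np.random.default_rng(4)
n = 1000
u = rng.uniform(0, 1, n)                       # exposure / index variable
x = rng.normal(size=n)                         # covariate
a = lambda t: np.sin(2 * np.pi * t)            # true varying intercept
b = lambda t: 1 + t ** 2                       # true varying slope
y = a(u) + b(u) * x + rng.normal(0, 0.3, n)

def local_wls(u0, h=0.1):
    # Epanechnikov kernel weights around u0, then weighted least squares on [1, x].
    w = np.maximum(1 - ((u - u0) / h) ** 2, 0.0)
    X = np.column_stack([np.ones(n), x])
    XtWX = X.T @ (w[:, None] * X)
    XtWy = X.T @ (w * y)
    return np.linalg.solve(XtWX, XtWy)          # (a_hat(u0), b_hat(u0))

grid = np.linspace(0.05, 0.95, 10)
est = np.array([local_wls(u0) for u0 in grid])
print(np.column_stack([grid, est, a(grid), b(grid)]))  # estimates vs truth
```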
87.
Most long-memory estimators for stationary fractionally integrated time series models are known to suffer non-negligible bias in small and finite samples. Simple moment estimators are also vulnerable to such bias, but can easily be corrected. In this article, the authors propose bias reduction methods for a moment estimator based on the lag-one sample autocorrelation. To reduce the bias of the moment estimator, the authors explicitly obtain the exact bias of the lag-one sample autocorrelation up to order n^(-1). An example is presented in which the exact first-order bias can be noticeably more accurate than its asymptotic counterpart, even for large samples. The authors show via a simulation study that the proposed methods are promising and effective in reducing the bias of the moment estimator with minimal variance inflation. The proposed methods are applied to Northern Hemisphere data. The Canadian Journal of Statistics 37: 476–493; 2009 © 2009 Statistical Society of Canada
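A sketch (not the authors' code) of the lag-one moment estimator of the long-memory parameter d for an ARFIMA(0, d, 0) series, where rho(1) = d / (1 - d), together with a small Monte Carlo that exhibits the small-sample bias the article sets out to correct. The article's exact O(1/n) bias formula is not reproduced here; the simulation settings are illustrative.

```python
# Lag-one moment estimator of d and its finite-sample bias for ARFIMA(0, d, 0).
import numpy as np
from scipy.linalg import toeplitz, cholesky

def arfima_corr(d, n):
    # Autocorrelations of ARFIMA(0, d, 0): rho(k) = prod_{j<=k} (j - 1 + d) / (j - d).
    rho = np.ones(n)
    for k in range(1, n):
        rho[k] = rho[k - 1] * (k - 1 + d) / (k - d)
    return rho

def simulate(d, n, rng):
    # Exact Gaussian simulation via the Cholesky factor of the correlation matrix.
    L = cholesky(toeplitz(arfima_corr(d, n)), lower=True)
    return L @ rng.normal(size=n)

def d_moment(x):
    x = x - x.mean()
    rho1 = np.sum(x[:-1] * x[1:]) / np.sum(x ** 2)   # lag-one sample autocorrelation
    return rho1 / (1 + rho1)                         # invert rho(1) = d / (1 - d)

rng = np.random.default_rng(5)
d_true, n, reps = 0.3, 100, 500
est = np.array([d_moment(simulate(d_true, n, rng)) for _ in range(reps)])
print("mean estimate:", est.mean(), " bias:", est.mean() - d_true)
```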
88.
The EM algorithm is a popular method for computing maximum likelihood estimates. One of its drawbacks is that it does not produce standard errors as a by-product. We consider obtaining standard errors by numerical differentiation. Two approaches are considered. The first differentiates the Fisher score vector to yield the Hessian of the log-likelihood. The second differentiates the EM operator and uses an identity that relates its derivative to the Hessian of the log-likelihood. The well-known SEM algorithm uses the second approach. We consider three additional algorithms: one that uses the first approach and two that use the second. We evaluate the complexity and precision of these three algorithms and the SEM algorithm in seven examples. The first is a single-parameter example used to give insight. The others are three examples in each of two areas of EM application: Poisson mixture models and the estimation of covariance from incomplete data. The examples show that there are algorithms that are much simpler and more accurate than the SEM algorithm. Hopefully their simplicity will increase the availability of standard error estimates in EM applications. It is shown that, as previously conjectured, a symmetry diagnostic can accurately estimate errors arising from numerical differentiation. Some issues related to the speed of the EM algorithm and of algorithms that differentiate the EM operator are identified.
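A sketch of the first approach described above: after an EM fit, obtain standard errors by numerically differentiating the score vector of the observed-data log-likelihood (central differences) to get the Hessian, and invert the observed information. The example is a two-component Poisson mixture, one of the abstract's two application areas; the data, starting values, and step size are illustrative, and this is not the authors' implementation.

```python
# EM for a two-component Poisson mixture, then SEs via a numerical Hessian of the score.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(6)
n = 2000
z = rng.uniform(size=n) < 0.3
y = np.where(z, rng.poisson(2.0, n), rng.poisson(7.0, n))

def em_fit(y, n_iter=200):
    pi, lam1, lam2 = 0.5, 1.0, 8.0
    for _ in range(n_iter):
        f1, f2 = poisson.pmf(y, lam1), poisson.pmf(y, lam2)
        r = pi * f1 / (pi * f1 + (1 - pi) * f2)        # E-step responsibilities
        pi = r.mean()                                   # M-step updates
        lam1 = np.sum(r * y) / np.sum(r)
        lam2 = np.sum((1 - r) * y) / np.sum(1 - r)
    return np.array([pi, lam1, lam2])

def score(theta):
    # Analytic score of the observed-data log-likelihood.
    pi, lam1, lam2 = theta
    f1, f2 = poisson.pmf(y, lam1), poisson.pmf(y, lam2)
    mix = pi * f1 + (1 - pi) * f2
    d_pi = np.sum((f1 - f2) / mix)
    d_l1 = np.sum(pi * f1 * (y / lam1 - 1) / mix)
    d_l2 = np.sum((1 - pi) * f2 * (y / lam2 - 1) / mix)
    return np.array([d_pi, d_l1, d_l2])

def numerical_hessian(theta, eps=1e-5):
    # Central differences of the score vector give the Hessian column by column.
    H = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3)
        e[j] = eps
        H[:, j] = (score(theta + e) - score(theta - e)) / (2 * eps)
    return (H + H.T) / 2                                # symmetrize

theta_hat = em_fit(y)
se = np.sqrt(np.diag(np.linalg.inv(-numerical_hessian(theta_hat))))
print("MLE (pi, lam1, lam2):", theta_hat)
print("standard errors:     ", se)
```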
89.
A practical problem with large-scale survey data is the possible presence of overdispersion, which occurs when the data display more variability than is predicted by the variance–mean relationship. This article describes a probability distribution generated by a mixture of discrete random variables to capture uncertainty, feeling, and overdispersion. Specifically, several tests for detecting overdispersion are implemented on the basis of the asymptotic theory for maximum likelihood estimators. We discuss the results of a simulation experiment concerning log-likelihood ratio, Wald, score, and profile tests. Finally, some real datasets are analyzed to illustrate these results.
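A far simpler illustration than the article's mixture-model-based tests: the classical dispersion-index check of Poisson-ness for an i.i.d. count sample. Under the Poisson null, (n - 1) times the sample variance divided by the sample mean is approximately chi-square with n - 1 degrees of freedom, and large values indicate overdispersion. The negative binomial data below are illustrative.

```python
# Dispersion-index test for overdispersion relative to the Poisson.
import numpy as np
from scipy.stats import chi2, nbinom

rng = np.random.default_rng(7)
n_obs = 400
# Overdispersed counts: negative binomial with mean 5 and variance > mean.
y = nbinom.rvs(3, 3 / (3 + 5), size=n_obs, random_state=rng)

T = (n_obs - 1) * y.var(ddof=1) / y.mean()
p_value = chi2.sf(T, df=n_obs - 1)
print(f"dispersion statistic = {T:.1f}, p-value = {p_value:.2e}")
print("sample mean:", y.mean(), " sample variance:", y.var(ddof=1))
```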
90.
We estimate two well-known risk measures, the value-at-risk (VaR) and the expected shortfall, conditional on a functional variable (i.e., a random variable valued in a semi(pseudo-)metric space). We use nonparametric kernel estimation to construct estimators of these quantities under general dependence conditions. Theoretical properties are stated, while practical aspects are illustrated on simulated data from nonlinear functional and GARCH(1,1) models. Some ideas on bandwidth selection using the bootstrap are introduced. Finally, an empirical example is given using S&P 500 time series data.
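A sketch of kernel estimation of conditional VaR and expected shortfall, using a scalar conditioning variable rather than the fully functional one of the paper: weight past losses with a kernel in the conditioning variable, invert the weighted empirical CDF for the VaR, and average the losses beyond it for the ES. The kernel, bandwidth, level, and simulated data are illustrative assumptions.

```python
# Kernel-weighted conditional VaR and expected shortfall at level alpha.
import numpy as np

rng = np.random.default_rng(8)
n = 5000
x = rng.normal(size=n)                        # conditioning variable (e.g. lagged volatility)
loss = rng.normal(0, 0.5 + 0.4 * np.abs(x))   # losses whose spread depends on x

def conditional_var_es(x0, alpha=0.95, h=0.3):
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)    # Gaussian kernel weights
    w = w / w.sum()
    order = np.argsort(loss)
    cdf = np.cumsum(w[order])                 # weighted empirical conditional CDF
    var = loss[order][np.searchsorted(cdf, alpha)]
    tail = loss > var
    es = np.sum(w[tail] * loss[tail]) / np.sum(w[tail])
    return var, es

for x0 in (0.0, 2.0):
    var, es = conditional_var_es(x0)
    print(f"x0 = {x0}: VaR_95 = {var:.3f}, ES_95 = {es:.3f}")
```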