131.

In this article, the validity of procedures for testing the significance of the slope in quantitative linear models with one explanatory variable and first-order autoregressive [AR(1)] errors is analyzed in a Monte Carlo study conducted in the time domain. Two cases are considered for the regressor: fixed and trended versus random and AR(1). In addition to the classical t-test using the Ordinary Least Squares (OLS) estimator of the slope and its standard error, we consider seven t-tests with n − 2 df built on the Generalized Least Squares (GLS) estimator or an estimated GLS estimator, three variants of the classical t-test with different variances of the OLS estimator, two asymptotic tests built on the Maximum Likelihood (ML) estimator, the F-test for fixed effects based on the Restricted Maximum Likelihood (REML) estimator in the mixed-model approach, two t-tests with n − 2 df based on first differences (FD) and first-difference ratios (FDR), and four modified t-tests using various corrections of the number of degrees of freedom. The FDR t-test, the REML F-test and the modified t-test using Dutilleul's effective sample size are the most valid among the testing procedures that do not assume complete knowledge of the covariance matrix of the errors. However, modified t-tests are not applicable and the FDR t-test suffers from a lack of power when the regressor is fixed and trended (i.e., FDR is the same as FD in this case when observations are equally spaced), whereas the REML algorithm fails to converge at small sample sizes. The classical t-test is valid when the regressor is fixed and trended and autocorrelation among errors is predominantly negative, and when the regressor is random and AR(1), like the errors, and autocorrelation is moderately negative or positive. We discuss the results graphically, in terms of the circularity condition defined in repeated measures ANOVA and of the effective sample size used in correlation analysis with autocorrelated sample data. An example with environmental data is presented.
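As a hedged illustration of the kind of time-domain Monte Carlo comparison described here, the sketch below simulates the null model with AR(1) errors and a fixed, trended regressor and records the empirical size of the classical OLS t-test; the sample size, autocorrelation values and replication count are illustrative assumptions, not the article's settings.

```python
import numpy as np
from scipy import stats

def ols_t_rejection_rate(n=30, rho=0.5, n_rep=5000, alpha=0.05, seed=0):
    """Empirical size of the classical OLS t-test for the slope
    when errors are AR(1) and the true slope is 0."""
    rng = np.random.default_rng(seed)
    x = np.arange(n, dtype=float)          # fixed, trended regressor
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)
    rejections = 0
    for _ in range(n_rep):
        # AR(1) errors with unit innovation variance, stationary start
        e = np.empty(n)
        e[0] = rng.normal(scale=1.0 / np.sqrt(1 - rho**2))
        for t in range(1, n):
            e[t] = rho * e[t - 1] + rng.normal()
        y = e                              # true intercept and slope are 0
        xc = x - x.mean()
        b1 = (xc @ y) / (xc @ xc)          # OLS slope
        resid = y - y.mean() - b1 * xc
        se = np.sqrt(resid @ resid / (n - 2) / (xc @ xc))
        rejections += abs(b1 / se) > t_crit
    return rejections / n_rep

for rho in (-0.5, 0.0, 0.5):
    print(rho, ols_t_rejection_rate(rho=rho))  # size inflates for rho > 0
```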
132.
The authors consider dimensionality reduction methods used for prediction, such as reduced rank regression, principal component regression and partial least squares. They show how it is possible to obtain intermediate solutions by simultaneously estimating the latent variables for the predictors and for the responses. They obtain a continuum of solutions that goes from reduced rank regression to principal component regression via maximum likelihood and least squares estimation. Different solutions are compared using simulated and real data.
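One endpoint of that continuum, principal component regression, can be sketched in a few lines of NumPy; the number of retained components k and the simulated data below are illustrative assumptions, not the authors' estimator.

```python
import numpy as np

def pcr_fit(X, y, k):
    """Principal component regression: regress y on the first k
    principal components of the centered predictor matrix X."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal directions
    Z = Xc @ Vt[:k].T                  # scores on the first k components
    gamma = np.linalg.lstsq(Z, yc, rcond=None)[0]
    beta = Vt[:k].T @ gamma            # map back to original predictors
    intercept = y.mean() - X.mean(axis=0) @ beta
    return intercept, beta

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=100)
b0, b = pcr_fit(X, y, k=3)
print(b0, b.round(2))
```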
133.
Logistic regression using conditional maximum likelihood estimation has recently gained widespread use. Many of the applications of logistic regression have been in situations in which the independent variables are collinear. It is shown that collinearity among the independent variables seriously affects the conditional maximum likelihood estimator, in that the variance of this estimator is inflated in much the same way that collinearity inflates the variance of the least squares estimator in multiple regression. Drawing on the similarities between multiple and logistic regression, several alternative estimators, which reduce the effect of the collinearity and are easy to obtain in practice, are suggested and compared in a simulation study.
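The archetypal easy-to-obtain estimator in this spirit is a ridge-type (L2-penalized) logistic estimator; whether it coincides with the estimators compared in the article is an assumption. A minimal Newton-Raphson sketch:

```python
import numpy as np

def ridge_logistic(X, y, lam=1.0, n_iter=50):
    """Ridge-penalized logistic regression fitted by Newton-Raphson.
    The penalty shrinks the variance inflation caused by collinear
    predictors at the cost of some bias. For brevity the penalty is
    applied to all coefficients, including the intercept."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))   # fitted probabilities
        W = mu * (1.0 - mu)                      # IRLS weights
        grad = X.T @ (y - mu) - lam * beta
        hess = X.T @ (X * W[:, None]) + lam * np.eye(p)
        beta += np.linalg.solve(hess, grad)
    return beta

# two nearly collinear predictors
rng = np.random.default_rng(2)
x1 = rng.normal(size=200)
x2 = x1 + 0.01 * rng.normal(size=200)            # near-duplicate of x1
X = np.column_stack([np.ones(200), x1, x2])
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + x1))))
print(ridge_logistic(X, y, lam=0.0)[1:])         # unpenalized: unstable
print(ridge_logistic(X, y, lam=5.0)[1:])         # ridge: stabilized
```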
134.
Identical numerical integration experiments are performed on a CYBER 205 and an IBM 3081 in order to gauge the relative performance of several methods of integration. The methods employed are the general methods of Gauss-Legendre, iterated Gauss-Legendre, Newton-Cotes, Romberg and Monte Carlo as well as three methods, due to Owen, Dutt, and Clark respectively, for integrating the normal density. The bi- and trivariate normal densities and four other functions are integrated; the latter four have integrals expressible in closed form and some of them can be parameterized to exhibit singularities or highly periodic behavior. The various Gauss-Legendre methods tend to be most accurate (when applied to the normal density they are even more accurate than the special purpose methods designed for the normal) and while they are not the fastest, they are at least competitive. In scalar mode the CYBER is about 2-6 times faster than the IBM 3081 and the speed advantage of vectorised to scalar mode ranges from 6 to 15. Large scale econometric problems of the probit type should now be routinely soluble.
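A minimal sketch of the Gauss-Legendre approach applied to one of the simplest test integrands, the univariate standard normal density; the interval and node counts are illustrative choices, and the article's hardware timings obviously cannot be reproduced here.

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """Integrate f over [a, b] with an n-point Gauss-Legendre rule."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)   # map [-1, 1] to [a, b]
    return 0.5 * (b - a) * np.sum(weights * f(x))

phi = lambda x: np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

for n in (4, 8, 16, 32):
    approx = gauss_legendre(phi, -6.0, 6.0, n)
    print(n, approx, abs(approx - 1.0))         # converges rapidly toward 1
```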
135.
The skew t distribution is a flexible parametric family for fitting data, because it includes parameters that regulate skewness and kurtosis. A problem with this distribution is that, for moderate sample sizes, the maximum likelihood estimator of the shape parameter is infinite with positive probability. In order to try to solve this problem, Sartori (2006) has proposed using a modified score function as an estimating equation for the shape parameter. In this note we prove that the resulting modified maximum likelihood estimator is always finite, considering the degrees of freedom as known and greater than or equal to 2.
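A small sketch of the pathology, assuming the Azzalini-type skew t parameterization with known degrees of freedom: when every observation in a small sample falls on one side of the location, the log-likelihood is monotone in the shape parameter and the MLE escapes to infinity. Sartori's modified-score correction itself is not reproduced here.

```python
import numpy as np
from scipy import stats

def skew_t_loglik(alpha, x, nu):
    """Log-likelihood of the Azzalini-type skew t with location 0,
    scale 1, fixed degrees of freedom nu and shape parameter alpha."""
    w = alpha * x * np.sqrt((nu + 1) / (nu + x**2))
    return np.sum(np.log(2) + stats.t.logpdf(x, df=nu)
                  + stats.t.logcdf(w, df=nu + 1))

# a small sample that happens to be all positive
x = np.array([0.3, 0.9, 1.4, 0.2, 0.7])
for alpha in (1, 10, 100, 1000):
    print(alpha, skew_t_loglik(alpha, x, nu=3))
# the log-likelihood keeps increasing with alpha: the MLE is infinite
```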
136.
A study to investigate the effect of human immunodeficiency virus (HIV) status on the course of neurological impairment, conducted by the HIV Center at Columbia University, followed a cohort of HIV-positive and HIV-negative gay men for 5 years and assessed the presence or absence of neurological impairment every 6 months. Almost half of the subjects dropped out before the end of the study for reasons that might have been related to the missing neurological data. We propose likelihood-based methods for analysing such binary longitudinal data under informative and non-informative drop-out. A transition model is assumed for the binary response, and several models for the drop-out processes are considered which are functions of the response variable (neurological impairment). The likelihood ratio test is used to compare models with informative and non-informative drop-out mechanisms. Using simulations, we investigate the percentage bias and mean-squared error (MSE) of the parameter estimates in the transition model under various assumptions for the drop-out. We find evidence for informative drop-out in the study, and we illustrate that the bias and MSE for the parameters of the transition model are not directly related to the observed drop-out or missing data rates. The effect of HIV status on the neurological impairment is found to be statistically significant under each of the models considered for the drop-out, although the regression coefficient may be biased in certain cases. The presence and relative magnitude of the bias depend on factors such as the probability of drop-out conditional on the presence of neurological impairment and the prevalence of neurological impairment in the population under study.
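The transition-model ingredient can be sketched by regressing the current binary response on the lagged response plus a covariate, assuming a first-order Markov structure; the covariate standing in for HIV status, the coefficients and the simulated design are hypothetical, and the drop-out models and likelihood ratio comparison are beyond a short sketch.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_subj, n_visits = 100, 10

# simulate binary responses with first-order (Markov) dependence
hiv = rng.binomial(1, 0.5, size=n_subj)   # hypothetical covariate
rows = []
for i in range(n_subj):
    y_prev = 0
    for t in range(n_visits):
        eta = -2.0 + 1.5 * y_prev + 0.8 * hiv[i]
        y = rng.binomial(1, 1 / (1 + np.exp(-eta)))
        if t > 0:
            rows.append((y, y_prev, hiv[i]))
        y_prev = y

y, y_prev, x = map(np.array, zip(*rows))
X = sm.add_constant(np.column_stack([y_prev, x]))
fit = sm.Logit(y, X).fit(disp=0)
print(fit.params)    # recovers roughly (-2.0, 1.5, 0.8)
```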
137.
A problem arising from the study of the spread of a viral infection among potato plants by aphids appears to involve a mixture of two linear regressions on a single predictor variable. The plant scientists studying the problem were particularly interested in obtaining a 95% confidence upper bound for the infection rate. We discuss briefly the procedure for fitting mixtures of regression models by means of maximum likelihood, effected via the EM algorithm. We give general expressions for the implementation of the M-step and then address the issue of conducting statistical inference in this context. A technique due to T. A. Louis may be used to estimate the covariance matrix of the parameter estimates by calculating the observed Fisher information matrix. We develop general expressions for the entries of this information matrix. Having the complete covariance matrix permits the calculation of confidence and prediction bands for the fitted model. We also investigate the testing of hypotheses concerning the number of components in the mixture via parametric and 'semiparametric' bootstrapping. Finally, we develop a method of producing diagnostic plots of the residuals from a mixture of linear regressions.
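A minimal EM sketch for a two-component mixture of simple linear regressions with a common error variance, under assumed starting values; the confidence bands, bootstrap tests and diagnostics discussed in the article are omitted.

```python
import numpy as np
from scipy import stats

def em_mix_reg(x, y, n_iter=200):
    """EM for a two-component mixture of simple linear regressions
    with mixing weight pi and a common error s.d. s."""
    X = np.column_stack([np.ones_like(x), x])
    # crude starting values: fit each half of the data sorted by y
    idx = np.argsort(y)
    lo, hi = idx[: len(y) // 2], idx[len(y) // 2 :]
    beta = np.array([np.linalg.lstsq(X[lo], y[lo], rcond=None)[0],
                     np.linalg.lstsq(X[hi], y[hi], rcond=None)[0]])
    pi, s = 0.5, np.std(y)
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each observation
        d1 = pi * stats.norm.pdf(y, X @ beta[0], s)
        d2 = (1 - pi) * stats.norm.pdf(y, X @ beta[1], s)
        r = d1 / (d1 + d2)
        # M-step: weighted least squares per component, shared variance
        for k, w in enumerate((r, 1 - r)):
            XtW = X.T * w
            beta[k] = np.linalg.solve(XtW @ X, XtW @ y)
        e1, e2 = y - X @ beta[0], y - X @ beta[1]
        s = np.sqrt(np.mean(r * e1**2 + (1 - r) * e2**2))
        pi = r.mean()
    return pi, beta, s

rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 300)
z = rng.binomial(1, 0.6, 300)
y = np.where(z == 1, 1 + 2 * x, 8 - x) + rng.normal(0, 1, 300)
pi_hat, beta_hat, s_hat = em_mix_reg(x, y)
print(round(pi_hat, 2), beta_hat.round(2), round(s_hat, 2))
```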
138.
For asymptotic posterior normality in the one-parameter cases, Weng [2003. On Stein's identity for posterior normality. Statist. Sinica 13, 495–506] proposed to use a version of Stein's Identity to write the posterior expectations for functions of a normalized quantity in a form that is more transparent and can be easily analyzed. In the present paper we extend this approach to the multi-parameter cases and compare our conditions with earlier work. Three examples are used to illustrate the application of this method.
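For orientation, the classical univariate identity on which such arguments rest is shown below; the paper's multiparameter posterior version is not restated here.

```latex
% Stein's identity for the standard normal: if Z ~ N(0,1) and f is
% absolutely continuous with E|f'(Z)| finite, then
\mathbb{E}\left[ Z \, f(Z) \right] = \mathbb{E}\left[ f'(Z) \right]
```

Loosely speaking, applying an identity of this form to posterior expectations of a normalized parameter makes the leading normal term explicit, which is what renders the posterior expansion transparent and easy to analyze.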
139.
In reliability analysis, accelerated life-testing allows for gradual increment of stress levels on test units during an experiment. In a special class of accelerated life tests known as step-stress tests, the stress levels increase discretely at pre-fixed time points, and this allows the experimenter to obtain information on the parameters of the lifetime distributions more quickly than under normal operating conditions. Moreover, when a test unit fails, there is often more than one fatal cause for the failure, such as mechanical or electrical. In this article, we consider the simple step-stress model under Type-II censoring when the lifetime distributions of the different risk factors are independently exponentially distributed. Under this setup, we derive the maximum likelihood estimators (MLEs) of the unknown mean parameters of the different causes under the assumption of a cumulative exposure model. The exact distributions of the MLEs of the parameters are then derived through the use of conditional moment generating functions. Using these exact distributions as well as the asymptotic distributions and the parametric bootstrap method, we discuss the construction of confidence intervals for the parameters and assess their performance through Monte Carlo simulations. Finally, we illustrate the methods of inference discussed here with an example.
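A sketch of the closed-form MLEs in the single-cause version of this setting, assuming exponential lifetimes, a cumulative exposure model, a stress change at time tau and Type-II censoring at the r-th failure; the formulas below are the standard single-cause ones, an assumption rather than the article's competing-risks derivation, which adds one such estimate per cause.

```python
import numpy as np

def step_stress_mle(t, n, tau, r):
    """MLEs of the mean lifetimes (theta1, theta2) in a simple
    step-stress test with exponential lifetimes, cumulative exposure,
    stress change at tau and Type-II censoring at the r-th failure.
    t : the r observed failure times, n : number of units on test."""
    t = np.sort(np.asarray(t, dtype=float))[:r]
    n1 = int(np.sum(t <= tau))        # failures under the first stress
    if n1 == 0 or n1 == r:
        raise ValueError("need failures under both stress levels")
    # total time on test accumulated under each stress level
    U1 = t[:n1].sum() + (n - n1) * tau
    U2 = (t[n1:] - tau).sum() + (n - r) * (t[r - 1] - tau)
    return U1 / n1, U2 / (r - n1)

# hypothetical failure times, in hours
times = [12., 35., 58., 81., 110., 140., 165., 200.]
print(step_stress_mle(times, n=12, tau=100.0, r=8))
```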
140.
Risk is at the centre of many policy decisions in companies, governments and other institutions. The risk of road fatalities concerns local governments in planning countermeasures, the risk and severity of counterparty default concerns bank risk managers daily and the risk of infection has actuarial and epidemiological consequences. However, risk cannot be observed directly and it usually varies over time. We introduce a general multivariate time series model for the analysis of risk based on latent processes for the exposure to an event, the risk of that event occurring and the severity of the event. Linear state space methods can be used for the statistical treatment of the model. The new framework is illustrated for time series of insurance claims, credit card purchases and road safety. It is shown that the general methodology can be effectively used in the assessment of risk.
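The scalar building block of such linear state space treatments is the Kalman filter for a local level model, sketched below; the article's multivariate exposure/risk/severity decomposition is richer, and the variances and simulated series here are illustrative assumptions.

```python
import numpy as np

def kalman_local_level(y, sigma2_eps, sigma2_eta, a0=0.0, p0=1e7):
    """Kalman filter for the local level model
        y_t = mu_t + eps_t,   mu_{t+1} = mu_t + eta_t,
    returning filtered estimates of the latent level mu_t."""
    a, p = a0, p0                      # diffuse-ish initial state
    filtered = np.empty(len(y))
    for t, obs in enumerate(y):
        f = p + sigma2_eps             # prediction-error variance
        k = p / f                      # Kalman gain
        a = a + k * (obs - a)          # update with the observation
        p = p * (1 - k)
        filtered[t] = a
        p = p + sigma2_eta             # predict the next state
    return filtered

rng = np.random.default_rng(5)
mu = np.cumsum(rng.normal(0, 0.3, 200))   # latent risk level
y = mu + rng.normal(0, 1.0, 200)          # noisy observed series
est = kalman_local_level(y, sigma2_eps=1.0, sigma2_eta=0.09)
print(np.round(est[-5:], 2))
```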