101.
The paper derives Bartlett corrections for improving the chi-square approximation to the likelihood ratio statistic in a class of location-scale families of distributions, which encompasses the elliptical family of distributions as well as asymmetric distributions such as the extreme value distributions. We present, in matrix notation, a Bartlett-corrected likelihood ratio statistic for testing that a subset of the nonlinear regression coefficients in this class of models equals a given vector of constants. The formulae derived are simple enough to be used analytically to obtain several Bartlett corrections in a variety of important models. We show that these formulae generalize a number of previously published results. We also present simulation results comparing the sizes and powers of the usual likelihood ratio tests and their Bartlett-corrected versions when the scale parameter is considered known and when this parameter is incorrectly specified.
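The idea of a Bartlett correction is to rescale the likelihood ratio statistic so that its mean matches that of the reference chi-square distribution. A minimal sketch in Python; the normal-variance example and the simulation-based estimate of E[LR] are illustrative stand-ins for the paper's analytic formulae:

```python
import numpy as np

def bartlett_correct(lr, expected_mean, df):
    """Rescale an LR statistic so its mean matches the chi-square df.

    Bartlett's correction replaces LR by LR * df / E[LR]; with E[LR]
    expanded to order 1/n, the corrected statistic's distribution is
    closer to chi-square(df)."""
    return lr * df / expected_mean

# Toy check: -2 log LR for H0: sigma^2 = 1 from N(0, sigma^2) samples,
# with E[LR] estimated by simulation instead of an analytic expansion.
rng = np.random.default_rng(0)
n, df, reps = 10, 1, 20000
x = rng.normal(size=(reps, n))
s2 = (x**2).mean(axis=1)             # MLE of sigma^2
lr = n * (s2 - 1 - np.log(s2))       # -2 log likelihood ratio
lr_adj = bartlett_correct(lr, lr.mean(), df)
print(lr.mean(), lr_adj.mean())
```

In a real application the factor E[LR] would come from the paper's closed-form expansion rather than from simulation.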
102.
This paper extends stochastic conditional duration (SCD) models for financial transaction data to allow for correlation between the innovations of the observed duration process and of the latent log duration process. Suitable Markov chain Monte Carlo (MCMC) algorithms are developed to fit the resulting SCD models under various distributional assumptions about the innovation of the measurement equation. Unlike the estimation methods commonly used for SCD models in the literature, we work with the original specification of the model, without subjecting the observation equation to a logarithmic transformation. Results of simulation studies suggest that our proposed models and corresponding estimation methodology perform quite well. We also apply an auxiliary particle filter technique to construct one-step-ahead in-sample and out-of-sample duration forecasts from the fitted models. Applications to the IBM transaction data allow comparison of our models and methods with those existing in the literature.
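As a sketch of the data-generating process being modeled, the following simulates a basic SCD series in which the measurement-equation innovation is correlated with the latent AR(1) innovation via a Gaussian copula. All parameter names and values are illustrative assumptions, not the article's specification:

```python
import numpy as np
from scipy.special import ndtr  # standard normal CDF

# Observed duration x_i = exp(psi_i) * eps_i, with latent AR(1) log
# duration psi_i = omega + beta * psi_{i-1} + u_i.  Here rho induces
# correlation between eps_i and u_i (a Gaussian copula on eps).
rng = np.random.default_rng(5)
n, omega, beta, sigma_u, rho = 5000, 0.05, 0.9, 0.2, -0.3

z = rng.normal(size=(n, 2))
u = sigma_u * z[:, 0]
v = rho * z[:, 0] + np.sqrt(1 - rho**2) * z[:, 1]
eps = -np.log(1.0 - ndtr(v))        # unit exponential, correlated with u

psi = np.empty(n)
psi[0] = omega / (1 - beta)         # start at the stationary mean
for i in range(1, n):
    psi[i] = omega + beta * psi[i - 1] + u[i]
x = np.exp(psi) * eps               # observed durations, all positive
print("mean duration:", x.mean())
```

Fitting such a series is where the article's MCMC and auxiliary particle filter machinery comes in; this block only illustrates the model structure.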
103.
We consider the issue of sampling from the posterior distribution of exponential random graph (ERG) models and other statistical models with intractable normalizing constants. Existing methods based on exact sampling are either infeasible or require very long computing time. We study a class of approximate Markov chain Monte Carlo (MCMC) sampling schemes that deal with this issue. We also develop a new Metropolis–Hastings kernel to sample sparse large networks from ERG models. We illustrate the proposed methods on several examples.
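One standard MCMC device for intractable normalizing constants is the exchange algorithm, which cancels the unknown constants by drawing an auxiliary dataset at the proposed parameter value. A toy sketch, where a one-parameter discrete exponential family stands in for an ERG model (for real ERG models the auxiliary draw would itself come from a long MCMC run over graphs, which is where approximate schemes enter):

```python
import numpy as np

# Toy model p(y | theta) ∝ exp(theta * y), y in {0,...,m}, whose
# normalizing constant Z(theta) we pretend is intractable.
rng = np.random.default_rng(1)
m = 10
support = np.arange(m + 1)

def sample_model(theta):
    # Exact draw from p(. | theta); the stand-in for a graph sampler.
    w = np.exp(theta * support)
    return rng.choice(support, p=w / w.sum())

y_obs = 7                                # observed data
theta, chain = 0.0, []
for _ in range(5000):
    theta_prop = theta + rng.normal(scale=0.5)
    y_aux = sample_model(theta_prop)     # auxiliary draw at the proposal
    # log acceptance ratio: Z(theta) and Z(theta_prop) cancel exactly
    log_a = (theta_prop - theta) * y_obs + (theta - theta_prop) * y_aux
    log_a += -0.5 * (theta_prop**2 - theta**2)   # N(0, 1) prior on theta
    if np.log(rng.uniform()) < log_a:
        theta = theta_prop
    chain.append(theta)
chain = np.array(chain)
print("posterior mean of theta:", chain.mean())
```

Since y_obs = 7 exceeds the mean of the flat (theta = 0) model, the posterior mass sits on positive theta.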
104.
This article introduces a parametric robust way of comparing two population means and two population variances. With large samples, the comparison of two means under model misspecification is less of a problem, for the validity of inference is protected by the central limit theorem. However, the assumption of normality is generally required for the inference on the ratio of two variances to be carried out with the familiar F statistic. A parametric robust approach that is insensitive to the distributional assumption is proposed here. More specifically, it is demonstrated that the normal likelihood function can be adjusted to yield asymptotically valid inferences for all underlying distributions with finite fourth moments. The normal likelihood function, on the other hand, is itself robust for the comparison of two means, so no adjustment is needed there.
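In the same spirit as the adjustment described above (though not necessarily its exact form), variance inference can be made insensitive to normality by replacing the normal-theory asymptotic variance 2/n of log s² with the fourth-moment-based (κ − 1)/n, where κ is the kurtosis. A hedged sketch with illustrative function names:

```python
import numpy as np

def log_var_ratio_ci(x, y, z_crit=1.96):
    """Kurtosis-adjusted CI for log(var_x / var_y).

    Under normality Var(log s^2) ≈ 2/n; for a general distribution with
    finite fourth moment it is ≈ (kappa - 1)/n, with kappa the kurtosis
    E[(X - mu)^4] / sigma^4.  Plugging in the sample kurtosis keeps the
    interval asymptotically valid without assuming normality."""
    def stats(z):
        n = len(z)
        c = z - z.mean()
        s2 = (c**2).mean()
        kappa = (c**4).mean() / s2**2
        return np.log(s2), (kappa - 1) / n
    lx, vx = stats(x)
    ly, vy = stats(y)
    half = z_crit * np.sqrt(vx + vy)
    d = lx - ly
    return d - half, d + half

# Heavy-tailed data with equal variances: a normal-theory interval would
# be too narrow here, while the adjusted one widens with the kurtosis.
rng = np.random.default_rng(2)
x = rng.standard_t(df=8, size=500)
y = rng.standard_t(df=8, size=500)
lo, hi = log_var_ratio_ci(x, y)
print(lo, hi)
```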
105.
This paper considers a likelihood ratio test for hypotheses defined by non-oblique closed convex cones, satisfying the so-called iteration projection property, on a set of k normal means. We obtain the critical values of the test using the chi-bar-squared distribution. Obtuse cones are introduced as a particular class of cones which are non-oblique with every one of their faces. Examples with the simple tree order cone and the total order cone illustrate the results.
106.
This paper considers a class of densities formed by taking the product of nonnegative polynomials and normal densities. These densities provide a rich class of distributions that can be used in modelling when faced with non-normal characteristics such as skewness and multimodality. In this paper we address inferential and computational issues arising in the practical implementation of this parametric family in the context of the linear model. Exact results are recorded for the conditional analysis of location-scale models and an importance sampling algorithm is developed for the implementation of a conditional analysis for the general linear model when using polynomial-normal distributions for the error.
107.
In this article, the Bayesian analysis of the regression model with error terms generated by a first-order autoregressive model is considered. Our aim is to study the effect of two kinds of contamination of this model via the posterior distribution of the regression parameter.
108.
Over the past decades, various principles for causal effect estimation have been proposed, all differing in how they adjust for measured confounders: either via traditional regression adjustment, by adjusting for the expected exposure given those confounders (e.g., the propensity score), or by inversely weighting each subject's data by the likelihood of the observed exposure, given those confounders. When the exposure is measured with error, this raises the question of whether these different estimation strategies might be differently affected and whether one of them is to be preferred for that reason. In this article, we investigate this by comparing inverse probability of treatment weighted (IPTW) estimators and doubly robust estimators for the exposure effect in linear marginal structural mean models (MSMs) with G-estimators, propensity score (PS) adjusted estimators, and ordinary least squares (OLS) estimators for the exposure effect in linear regression models. We find analytically that these estimators are equally affected when exposure misclassification is independent of the confounders, but not otherwise. Simulation studies reveal similar results for time-varying exposures and when the model of interest includes a logistic link.
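For concreteness, here is a minimal IPTW sketch for a binary exposure with a single confounder; the data-generating process and the use of the true propensity score (rather than an estimated one, e.g. from a logistic regression) are simplifying assumptions for illustration:

```python
import numpy as np

# Weight each subject by 1 / P(observed exposure | L), then contrast
# weighted outcome means.  The weighting breaks the dependence between
# exposure A and confounder L, so the contrast recovers the causal effect.
rng = np.random.default_rng(3)
n = 50_000
L = rng.normal(size=n)                          # confounder
p = 1 / (1 + np.exp(-L))                        # true propensity score
A = rng.binomial(1, p)                          # exposure
Y = 2.0 * A + 1.5 * L + rng.normal(size=n)      # outcome; true effect = 2

w = np.where(A == 1, 1 / p, 1 / (1 - p))        # inverse-probability weights
iptw = (np.average(Y[A == 1], weights=w[A == 1])
        - np.average(Y[A == 0], weights=w[A == 0]))
naive = Y[A == 1].mean() - Y[A == 0].mean()     # confounded comparison
print("naive:", naive, "IPTW:", iptw)
```

The naive contrast is biased upward because L raises both the exposure probability and the outcome; the weighted contrast sits near the true effect of 2.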
109.
We develop exact inference for the location and scale parameters of the Laplace (double exponential) distribution based on their maximum likelihood estimators from a Type-II censored sample. Based on some pivotal quantities, exact confidence intervals and tests of hypotheses are constructed. Upon conditioning first on the number of observations that are below the population median, exact distributions of the pivotal quantities are expressed as mixtures of linear combinations and of ratios of linear combinations of standard exponential random variables, which facilitates the computation of quantiles of these pivotal quantities. Tables of quantiles are presented for the complete sample case.
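For the complete-sample case, the Laplace MLEs have familiar closed forms (the Type-II censored versions studied above modify these): the location estimate is the sample median and the scale estimate is the mean absolute deviation about it. A quick numerical check:

```python
import numpy as np

# Laplace(mu, b) draws: the difference of two independent unit
# exponentials is Laplace(0, 1).
rng = np.random.default_rng(4)
mu, b, n = 3.0, 2.0, 100_000
x = mu + b * (rng.exponential(size=n) - rng.exponential(size=n))

mu_hat = np.median(x)                 # MLE of the location
b_hat = np.abs(x - mu_hat).mean()     # MLE of the scale
print(mu_hat, b_hat)
```

The article's contribution is the exact (rather than asymptotic) distribution theory for the censored analogues of these estimators.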
110.
Event counts are response variables with non-negative integer values representing the number of times that an event occurs within a fixed domain such as a time interval, a geographical area or a cell of a contingency table. Analysis of counts by Gaussian regression models ignores the discreteness, asymmetry and heteroscedasticity and is inefficient, providing unrealistic standard errors or possibly negative predictions of the expected number of events. The Poisson regression is the standard model for count data, with underlying assumptions on the generating process which may be implausible in many applications. Statisticians have long recognized the limitation of imposing equidispersion under the Poisson regression model. A typical situation is when the conditional variance exceeds the conditional mean, in which case models allowing for overdispersion are routinely used. Less reported is the case of underdispersion, with fewer modeling alternatives and assessments available in the literature. One such alternative, the Gamma-count model, is adopted here in the analysis of an agronomic experiment designed to investigate the effect of levels of defoliation on different phenological states upon the number of cotton bolls. The data set and code for the analysis are available as online supplements. Results show improvements over the Poisson model and the semi-parametric quasi-Poisson model in capturing the observed variability in the data. Estimating rather than assuming the underlying variance process leads to important insights into the process.
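The Gamma-count probability mass function follows from a renewal-process representation: with Gamma(α, β) waiting times between events, the y-th event occurs before T with probability given by the regularized lower incomplete gamma function. A sketch, using the common renewal-form parameterization (not necessarily the article's notation):

```python
import numpy as np
from scipy.special import gammainc   # regularized lower incomplete gamma
from scipy.stats import poisson

def gamma_count_pmf(y, alpha, beta, T=1.0):
    """P(N(T) = y) for the Gamma-count model.

    The y-th arrival time is Gamma(alpha * y, rate beta), so
    P(N(T) >= y) = G(alpha * y, beta * T) with G the regularized lower
    incomplete gamma, and the pmf is a difference of two such terms.
    alpha > 1 gives underdispersion, alpha < 1 overdispersion, and
    alpha = 1 recovers Poisson(beta * T)."""
    y = np.asarray(y)
    lower = np.where(y == 0, 1.0,
                     gammainc(np.maximum(alpha * y, 1e-300), beta * T))
    upper = gammainc(alpha * (y + 1), beta * T)
    return lower - upper

ys = np.arange(60)
pois_like = gamma_count_pmf(ys, alpha=1.0, beta=5.0)
print(np.allclose(pois_like, poisson.pmf(ys, 5.0)))   # alpha=1 is Poisson
```

With α > 1 the implied count distribution has variance below its mean, which is exactly the underdispersed regime the article targets.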