8,201 search results (query time: 9 ms)
81.
We define the odd log-logistic exponential Gaussian regression with two systematic components, which extends heteroscedastic Gaussian regression and is suitable for the bimodal data that are quite common in agriculture. We estimate the parameters by the method of maximum likelihood. Simulations indicate that the maximum-likelihood estimators are accurate. The model assumptions are checked through case deletion and quantile residuals. The usefulness of the new regression model is illustrated with three real data sets from different areas of agriculture, all of which exhibit bimodality.
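The two-systematic-components idea can be sketched with a simpler relative of the model above: a heteroscedastic Gaussian regression in which both the mean and the log standard deviation are linear in a covariate, fitted by maximum likelihood. This is an illustrative sketch, not the authors' odd log-logistic model; the simulated data and all variable names are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative heteroscedastic Gaussian regression with two systematic
# components: mu_i = b0 + b1*x_i and log(sigma_i) = g0 + g1*x_i.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = 1.0 + 2.0 * x + np.exp(-1.0 + 0.5 * x) * rng.standard_normal(200)

def negloglik(theta, x, y):
    b0, b1, g0, g1 = theta
    mu = b0 + b1 * x
    sigma = np.exp(g0 + g1 * x)        # log link keeps sigma positive
    return 0.5 * np.sum(np.log(2 * np.pi * sigma**2) + ((y - mu) / sigma) ** 2)

fit = minimize(negloglik, x0=np.zeros(4), args=(x, y), method="BFGS")
b0, b1, g0, g1 = fit.x                 # b1 should be near 2.0, g1 near 0.5
```

The log link on the scale parameter is what lets the second systematic component be an unconstrained linear predictor.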
82.
It is well known that the classical Tobit estimator of the parameters of the censored regression (CR) model is inefficient when the error terms are non-normal. In this paper, we propose to use the modified maximum likelihood (MML) estimator for the CR model under the Jones and Faddy skew-t error distribution, which covers a wide range of skew and symmetric distributions. The MML estimators, providing an alternative to the Tobit estimator, are expressed explicitly and are asymptotically equivalent to the maximum likelihood estimator. A simulation study compares the efficiency of the MML estimators with classical estimators such as the ordinary least squares, Tobit, censored least absolute deviations, and symmetrically trimmed least squares estimators. The results show that the MML estimators perform well relative to the others under the root-mean-square-error criterion for the CR model. A real-life example illustrates the suitability of the MML methodology.
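For reference, the classical Tobit baseline that the MML estimators are compared against can be sketched as follows: with left-censoring at zero, uncensored observations contribute a normal density term and censored ones a normal CDF term. The simulated data and names below are illustrative assumptions; the skew-t MML estimator itself is not shown.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Classical Tobit MLE for a regression left-censored at zero.
rng = np.random.default_rng(1)
n = 500
x = rng.standard_normal(n)
y_star = 0.5 + 1.5 * x + rng.standard_normal(n)    # latent response
y = np.maximum(y_star, 0.0)                        # observed, censored at 0
cens = y == 0.0                                    # censoring indicator

def tobit_nll(theta):
    b0, b1, log_s = theta
    s = np.exp(log_s)
    mu = b0 + b1 * x
    ll_obs = norm.logpdf(y[~cens], mu[~cens], s)   # density for observed y
    ll_cen = norm.logcdf(-mu[cens] / s)            # P(y* <= 0) for censored
    return -(ll_obs.sum() + ll_cen.sum())

fit = minimize(tobit_nll, x0=np.zeros(3), method="BFGS")
b0_hat, b1_hat = fit.x[:2]                         # near 0.5 and 1.5
```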
83.
Extreme Value Theory (EVT) studies the tails of probability distributions in order to measure and quantify extreme maxima and minima. In river-flow data, an extreme level of a river may be related to the level of a neighboring river that flows into it: flooding at a location is very often caused by a very large flow from a tributary tens or hundreds of kilometers away. In this setting, a natural approach is to build the multivariate model through conditional specifications. Inspired by this idea, we propose a Bayesian model with a conditionally independent structure to describe the exceedance dependence between rivers, in which the marginal excesses of one river are modeled as linear functions of the excesses of the other rivers. The results show a strong positive connection between the excesses of one river and those of the others.
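The univariate building block of any such exceedance model is the peaks-over-threshold step: fit a generalized Pareto distribution (GPD) to the excesses of one river's flow above a high threshold. This sketch uses synthetic exponential "flows" (an assumption, not the paper's data), for which the true excess distribution is exponential, i.e. GPD with shape near zero.

```python
import numpy as np
from scipy.stats import genpareto

# Peaks-over-threshold for a single river: excesses over a high threshold
# are modeled with a generalized Pareto distribution.
rng = np.random.default_rng(2)
flows = rng.exponential(scale=100.0, size=5000)    # stand-in daily flows
u = np.quantile(flows, 0.95)                       # high threshold
excess = flows[flows > u] - u                      # exceedances over u

# Exponential data are memoryless, so the excesses are again Exp(100):
# the fitted GPD shape xi should be near 0 and the scale near 100.
xi, loc, scale = genpareto.fit(excess, floc=0.0)
```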
84.
85.
This paper proposes a new hysteretic vector autoregressive (HVAR) model in which regime switching may be delayed while the hysteresis variable lies in a hysteresis zone. We incorporate an adapted multivariate Student-t distribution obtained by amending the scale-mixture-of-normals representation; this HVAR model allows a different degree of freedom for each component time series. We use the proposed model to test for a causal relationship between any two target time series, and by using posterior odds ratios we overcome the limitations of the classical approach to multiple testing. Both simulated and real examples illustrate the suggested methods. We apply the proposed HVAR model to investigate the causal relationship between the quarterly growth rates of gross domestic product of the United Kingdom and the United States, and we check the pairwise lagged dependence of daily PM2.5 levels in three districts of Taipei.
86.
This paper proposes the use of the Bernstein–Dirichlet process prior for a new nonparametric approach to estimating the link function in the single-index model (SIM). The Bernstein–Dirichlet process prior has so far mainly been used for nonparametric density estimation; here we modify the approach to approximate the unknown link function. Instead of the usual Gaussian distribution, the error term is assumed to follow an asymmetric Laplace distribution, which increases the flexibility and robustness of the SIM. To automatically identify the truly active predictors, spike-and-slab priors are used for Bayesian variable selection. Posterior computations are performed via a Metropolis–Hastings-within-Gibbs sampler using a truncation-based algorithm for stick-breaking priors. We compare the efficiency of the proposed approach with well-established techniques in an extensive simulation study and illustrate its practical performance in an application to nonparametric modelling of the power consumption of a sewage treatment plant.
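The deterministic ingredient behind the Bernstein–Dirichlet prior is the Bernstein polynomial, which approximates any continuous function on [0, 1] as its degree grows. A minimal sketch (the target function here is an arbitrary smooth stand-in, not a link function from the paper):

```python
import numpy as np
from math import comb

# Degree-k Bernstein polynomial of f on [0, 1]:
#   B_k(f)(x) = sum_j f(j/k) * C(k, j) * x^j * (1 - x)^(k - j)
def bernstein(f, k, x):
    j = np.arange(k + 1)
    coeffs = np.array([comb(k, jj) for jj in j])
    basis = coeffs * x[:, None] ** j * (1 - x[:, None]) ** (k - j)
    return basis @ f(j / k)

f = np.sin                                   # smooth stand-in target
x = np.linspace(0, 1, 101)
err = np.max(np.abs(bernstein(f, 50, x) - f(x)))   # uniform error, small
```

For smooth f the uniform error shrinks roughly like 1/k, which is why the prior places a distribution on the degree k as well as on the mixture weights.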
87.
An accurate procedure is proposed to calculate approximate moments of progressive order statistics in the context of statistical inference for lifetime models. The study analyses how well power series expansions approximate the moments for location–scale distributions, achieving high precision and small deviations from the exact values. A comparison between exact and approximate methods is presented in tables and figures. The approximations are applied in two situations: first, computing the large-sample variance–covariance matrix of the maximum likelihood estimators; second, obtaining progressively censored sampling plans for log-normally distributed data. These problems illustrate that the proposed procedure computes the moments precisely for numerous censoring patterns and, in many cases, is the only feasible method because exact calculation may not be applicable.
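To see what "moments of order statistics" means concretely, here is a toy check for ordinary (non-progressive) order statistics, where an exact answer exists: for a standard uniform sample of size n, the i-th order statistic is Beta(i, n-i+1), so its mean is i/(n+1). The Monte Carlo comparison below is illustrative only.

```python
import numpy as np

# Mean of the i-th order statistic of n iid Uniform(0,1) draws:
# exact value i/(n+1), checked against a Monte Carlo estimate.
rng = np.random.default_rng(4)
n, i = 10, 3
exact = i / (n + 1)                                   # Beta(i, n-i+1) mean

samples = np.sort(rng.uniform(size=(200_000, n)), axis=1)[:, i - 1]
mc = samples.mean()                                   # close to 3/11
```

Under progressive censoring no such closed form is generally available, which is what motivates the series approximations studied in the paper.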
88.
In many practical applications, high-dimensional regression analyses must take into account measurement error in the covariates. It is thus necessary to extend regularization methods, which handle the situation where the number of covariates p largely exceeds the sample size n, to the case in which the covariates are also mismeasured. A variety of methods are available in this context, but many of them rely on knowledge of the measurement error distribution and the structure of its covariance matrix. In this paper, we compare some of these methods, focusing on situations relevant for practical applications; in particular, we evaluate them in setups where the measurement error distribution and dependence structure are unknown and must be estimated from data. Our focus is on variable selection, and the evaluation is based on extensive simulations.
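Why measurement error cannot simply be ignored is easiest to see in one dimension: classical additive error in a covariate attenuates the estimated slope toward zero by the reliability ratio var(x)/(var(x)+var(u)). A minimal simulated illustration (not one of the paper's methods):

```python
import numpy as np

# Attenuation bias from classical measurement error: regressing y on the
# mismeasured covariate w = x + u shrinks the slope by var(x)/(var(x)+var(u)).
rng = np.random.default_rng(5)
n = 100_000
x = rng.standard_normal(n)                 # true covariate, var 1
u = rng.standard_normal(n)                 # measurement error, var 1
w = x + u                                  # observed covariate
y = 2.0 * x + 0.1 * rng.standard_normal(n)

slope_true = np.cov(y, x)[0, 1] / np.var(x)    # near 2.0
slope_naive = np.cov(y, w)[0, 1] / np.var(w)   # near 2.0 * 1/(1+1) = 1.0
```

In the high-dimensional setting this same bias distorts which coefficients survive regularization, which is why the compared methods all try to correct for the error covariance.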
89.
The central limit theorem states that, as the sample size tends to infinity, the sampling distribution of the mean approaches a normal distribution; this result underlies the most common confidence-interval and sample-size formulas. This study analyzes how large the sample size must be before the distribution of the estimator of a proportion can be assumed to follow a normal distribution. We also propose a correction factor for the sample-size formulas that preserves the nominal confidence level even when the central limit theorem does not yet apply.
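The question studied here can be made concrete by measuring the worst-case gap between the exact binomial CDF of a proportion estimator and its normal approximation. The specific n and p values below are illustrative assumptions; a skewed case (small n·p) shows a visibly larger gap than a large-sample case.

```python
import numpy as np
from scipy.stats import binom, norm

# Maximum absolute gap between the binomial CDF and its continuity-corrected
# normal approximation, over all counts k = 0..n.
def max_cdf_gap(n, p):
    k = np.arange(n + 1)
    exact = binom.cdf(k, n, p)
    approx = norm.cdf((k + 0.5 - n * p) / np.sqrt(n * p * (1 - p)))
    return np.max(np.abs(exact - approx))

gap_small = max_cdf_gap(30, 0.05)    # n*p = 1.5: approximation is poor
gap_large = max_cdf_gap(1000, 0.05)  # n*p = 50: approximation is good
```

Rules of thumb such as requiring n·p and n·(1-p) to both exceed 10 are attempts to keep this gap small; the paper's correction factor addresses the cases where they fail.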
90.
Benjamin Laumen, Statistics, 2019, 53(3):569–600
In this paper, we revisit the progressive Type-I censoring scheme as originally introduced by Cohen [Progressively censored samples in life testing. Technometrics. 1963;5(3):327–339]. Original progressive Type-I censoring proceeds like progressive Type-II censoring but with fixed censoring times instead of failure-time-based censoring times. A time truncation was later added to this scheme by interpreting the final censoring time as a termination time, and consequently little work has been done on Cohen's original progressive censoring scheme with fixed censoring times. We therefore discuss distributional results for this scheme and establish exact distributional results in likelihood inference for exponentially distributed lifetimes. In particular, we obtain the exact distribution of the maximum likelihood estimator (MLE). Further, the stochastic monotonicity of the MLE is verified in order to construct exact confidence intervals for both the scale parameter and the reliability.
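For exponential lifetimes, the MLE under time censoring has a memorable closed form: total time on test divided by the number of observed failures. The sketch below uses simple (non-progressive) censoring at a single fixed time T as an assumption, which is the one-stage special case of the fixed-censoring-times scheme discussed above.

```python
import numpy as np

# Exponential MLE under Type-I (time) censoring at a fixed time T:
# theta_hat = total time on test / number of observed failures.
rng = np.random.default_rng(3)
theta = 10.0                        # true mean lifetime
T = 15.0                            # fixed censoring time
t = rng.exponential(theta, 2000)    # latent lifetimes
obs = np.minimum(t, T)              # observed: failure time or T
failures = int(np.sum(t <= T))      # number of actual failures

theta_hat = obs.sum() / failures    # MLE of the exponential mean
```

Because the number of failures is random here, the exact distribution of this estimator is nonstandard, which is precisely the kind of question the paper resolves for the progressive scheme.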