21.
Outlier detection algorithms are intimately connected with robust statistics that down-weight some observations to zero. We define a number of outlier detection algorithms related to the Huber-skip and least trimmed squares estimators, including the one-step Huber-skip estimator and the forward search. Next, we review a recently developed asymptotic theory of these estimators. Finally, we analyse the gauge, the fraction of wrongly detected outliers, for a number of outlier detection algorithms and establish an asymptotic normal and a Poisson theory for the gauge.
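A minimal sketch of the one-step Huber-skip idea, assuming an OLS initial fit, a naive residual scale estimate, and a normal-quantile cut-off c; the estimators in the paper may use robust initial estimators and iterate, so this is illustrative only.

```python
import numpy as np

def one_step_huber_skip(X, y, c=2.576):
    """One-step Huber-skip sketch: fit OLS, discard ("skip") observations
    whose absolute residual exceeds c times a naive scale estimate, then
    refit OLS on the retained observations. c and the scale estimator are
    simplifying assumptions."""
    X = np.column_stack([np.ones(len(y)), X])            # add intercept
    beta0, *_ = np.linalg.lstsq(X, y, rcond=None)        # initial full-sample OLS
    resid = y - X @ beta0
    sigma = np.sqrt(resid @ resid / (len(y) - X.shape[1]))
    keep = np.abs(resid) <= c * sigma                    # outliers are down-weighted to zero
    beta1, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    gauge = 1.0 - keep.mean()                            # fraction flagged as outliers
    return beta1, keep, gauge
```

Under the null of no outliers, the empirical gauge should hover near the nominal level implied by c, which is what the asymptotic normal and Poisson theory above describes.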
22.
The standard frequency-domain approximation to the Gaussian likelihood of a sample from an ARMA process is considered. The Newton-Raphson and Gauss-Newton numerical maximisation algorithms are evaluated for this approximate likelihood, and the relationships between these algorithms and those of Akaike and Hannan are explored. In particular, it is shown that Hannan's method has certain computational advantages over the other spectral estimation methods considered.
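For concreteness, a sketch of the frequency-domain (Whittle) approximate likelihood for the simplest special case, an AR(1) model; the helper name `whittle_ar1`, the optimiser, and the starting values are assumptions of this sketch, not the article's algorithms.

```python
import numpy as np
from scipy.optimize import minimize

def whittle_ar1(x):
    """Minimise the Whittle objective sum_j [log f(l_j) + I(l_j)/f(l_j)]
    over the AR(1) spectral density f(l) = s2/(2*pi) / |1 - phi*exp(-i*l)|^2,
    where I is the periodogram at the Fourier frequencies."""
    n = len(x)
    j = np.arange(1, (n - 1) // 2 + 1)                    # Fourier frequencies, excl. 0
    freqs = 2 * np.pi * j / n
    I = np.abs(np.fft.fft(x)[j]) ** 2 / (2 * np.pi * n)   # periodogram ordinates

    def neg_whittle(params):
        phi, s2 = params
        f = s2 / (2 * np.pi) / np.abs(1 - phi * np.exp(-1j * freqs)) ** 2
        return np.sum(np.log(f) + I / f)

    res = minimize(neg_whittle, x0=[0.0, np.var(x)],
                   bounds=[(-0.99, 0.99), (1e-6, None)])
    return res.x                                          # (phi_hat, s2_hat)
```

A general-purpose optimiser stands in here for the Newton-Raphson and Gauss-Newton iterations the article compares.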
23.
The estimation of the incremental cost–effectiveness ratio (ICER) has received increasing attention recently. It is expressed as the ratio of the change in costs of a therapeutic intervention to the change in the effects of the intervention. Despite the intuitive interpretation of the ICER as an additional cost per additional benefit unit, it is a challenge to estimate the distribution of a ratio of two stochastically dependent distributions. A vast literature on statistical methods for the ICER has developed over the past two decades, but none of these methods provides an unbiased estimator. Here, to obtain an unbiased estimator of the cost–effectiveness ratio (CER), a zero intercept is assumed in the bivariate normal regression. For equal sample sizes, the Iman–Conover algorithm is applied to construct the desired variance–covariance matrix of two random bivariate samples, and the estimation of the ICER then follows the same approach as for the CER. The bootstrapping method with the Iman–Conover algorithm is employed for unequal sample sizes. Simulation experiments are conducted to evaluate the proposed method. The regression-type estimator performs overwhelmingly better than the sample mean estimator in terms of mean squared error in all cases.
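A simplified two-sample sketch of the Iman–Conover reordering step described above: rearrange the cost and effect samples so they attain (approximately) a target rank correlation while leaving both marginal distributions untouched. Using a bivariate-normal score matrix is a simplification of the full algorithm, which uses van der Waerden scores and a Cholesky correction, and `iman_conover_pair` is a hypothetical helper name.

```python
import numpy as np
from scipy.stats import rankdata

def iman_conover_pair(x, y, target_rho, seed=None):
    """Reorder x and y to follow the rank pattern of correlated normal
    scores, inducing rank correlation near target_rho without changing
    either sample's values."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, target_rho], [target_rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=len(x))  # correlated scores
    x_out = np.sort(x)[rankdata(z[:, 0], method="ordinal") - 1]
    y_out = np.sort(y)[rankdata(z[:, 1], method="ordinal") - 1]
    return x_out, y_out
```

The Spearman correlation of the output pair sits close to `target_rho`, which is what lets the construction impose the desired variance–covariance structure between the cost and effect samples.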
24.
The hazard rate (HR) and mean residual lifetime are two of the most practical and best-known functions in biometry, reliability, statistics and life testing. Recently, the reversed HR function has been found to have interesting properties useful in additional areas such as censored data and forensic science. For these three biometric functions, we propose methods for testing that they take on a known functional form, against the alternative that they dominate, or are dominated by, this known form. This goodness-of-fit-type testing has wider applications and is more interesting than the long-standing procedures that test exponentiality against monotonicity of these functions, or even change point problems, since we can test against any choice of the survival distribution and not just exponentiality. For this general testing problem, we present easy-to-implement tests and generalize them into classes of statistics that could lead to more powerful and efficient testing.
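As a building block, here is a sketch of the empirical mean residual life for complete (uncensored) data, which is what such tests compare against a hypothesized functional form; the test statistics themselves and any censoring adjustments are omitted.

```python
import numpy as np

def empirical_mrl(lifetimes, t):
    """Empirical mean residual life at time t: the average remaining life
    among subjects still alive at t, for complete data."""
    alive = lifetimes[lifetimes > t]
    return np.nan if alive.size == 0 else np.mean(alive - t)

# Example: under an Exponential null, the MRL is constant (= the scale),
# so the empirical curve should stay flat near 2.0 here.
x = np.random.default_rng(0).exponential(scale=2.0, size=500)
for t in (0.5, 1.0, 2.0):
    print(t, empirical_mrl(x, t))
```

A dominance-type alternative would show this empirical curve sitting systematically above (or below) the hypothesized form across t.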
25.
The mean residual life measures the expected remaining life of a subject who has survived up to a particular time. When the survival time distribution is highly skewed or heavy-tailed, the restricted mean residual life should be considered instead. In this paper, we propose an additive–multiplicative restricted mean residual life model to study the association between the restricted mean residual life function and potential regression covariates in the presence of right censoring. This model extends the proportional mean residual life model by using an additive model as its covariate-dependent baseline. In the suggested model, some covariate effects are allowed to be time-varying. To estimate the model parameters, martingale estimating equations are developed, and the large-sample properties of the resulting estimators are established. In addition, to assess the adequacy of the model, we investigate a goodness-of-fit test that is asymptotically justified. The proposed methodology is evaluated via simulation studies and further applied to a kidney cancer data set collected from a clinical trial.
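To fix ideas, a sketch of the quantity being modelled: the empirical restricted mean residual life for complete data, which truncates remaining life at a horizon tau so that heavy tails cannot dominate. The paper's estimator additionally handles right censoring via martingale estimating equations, which this sketch does not.

```python
import numpy as np

def restricted_mrl(lifetimes, t, tau):
    """Empirical restricted mean residual life at time t with horizon tau:
    E[min(X, tau) - t | X > t], computed from complete (uncensored) data."""
    alive = lifetimes[lifetimes > t]
    if alive.size == 0:
        return np.nan
    return np.mean(np.minimum(alive, tau) - t)
```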
26.
Interval-valued variables have become very common in data analysis. Until now, symbolic regression has mostly approached this type of data from an optimization point of view, considering neither the probabilistic aspects of the models nor the nonlinear relationships between the interval response and the interval predictors. In this article, we formulate interval-valued variables as bivariate random vectors and introduce the bivariate symbolic regression model based on generalized linear models theory, which provides much-needed flexibility in practice. Important inferential aspects are investigated. Applications to synthetic and real data illustrate the usefulness of the proposed approach.
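One common way to make an interval response amenable to GLM machinery is to represent it by its centre and range and fit one GLM to each component. The centre–range split, the family choices (Gaussian/identity for the centre, Gamma/log for the strictly positive range), and the simulated data below are illustrative assumptions of this sketch, not the paper's exact bivariate model.

```python
import numpy as np
import statsmodels.api as sm

gen = np.random.default_rng(1)
x = gen.uniform(0, 10, 200)
centre = 2 + 0.5 * x + gen.normal(0, 1, 200)               # interval midpoint
width = np.exp(0.1 + 0.05 * x + gen.normal(0, 0.2, 200))   # interval range > 0
X = sm.add_constant(x)

centre_fit = sm.GLM(centre, X, family=sm.families.Gaussian()).fit()
width_fit = sm.GLM(width, X,
                   family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(centre_fit.params, width_fit.params)
```

Treating the two components jointly as a bivariate random vector, rather than as two independent fits as here, is precisely the extra step the article's model provides.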
27.
This article analyses diffusion-type processes from a new point of view. Consider two statistical hypotheses on a diffusion process. We do not use a classical test to reject or accept one hypothesis via the Neyman–Pearson procedure, and we do not invoke the Bayesian approach. As an alternative, we propose using a likelihood paradigm to characterize the statistical evidence in support of these hypotheses. The method is based on evidential inference introduced and described by Royall [Royall R. Statistical evidence: a likelihood paradigm. London: Chapman and Hall; 1997]. In this paper, we extend Royall's theory to the case where the data are observations from a diffusion-type process rather than iid observations. The empirical distribution of the likelihood ratio is used to formulate the probabilities of strong, misleading and weak evidence. Since the strength of evidence can be affected by the sampling characteristics, we present a simulation study that demonstrates these effects, and we try to control misleading evidence and reduce it by adjusting these characteristics. As an illustration, we apply the method to Microsoft stock prices.
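A Monte Carlo sketch of the misleading-evidence probability in Royall's sense, for the simplest diffusion: discretely observed Brownian motion with drift, dX = mu dt + sigma dW, whose increments are iid Gaussian. The model, the evidence threshold k = 8, and the helper name are illustrative choices, not the article's setting.

```python
import numpy as np
from scipy.stats import norm

def prob_misleading(mu0, mu1, sigma, T, n, k=8, reps=2000, seed=0):
    """Estimate P(LR favouring mu1 over mu0 exceeds k | data generated
    under mu0), i.e. the frequency of misleading evidence, by simulating
    increments of the discretely observed diffusion."""
    rng = np.random.default_rng(seed)
    dt = T / n
    dx = rng.normal(mu0 * dt, sigma * np.sqrt(dt), size=(reps, n))   # truth: mu0
    loglr = (norm.logpdf(dx, mu1 * dt, sigma * np.sqrt(dt)).sum(axis=1)
             - norm.logpdf(dx, mu0 * dt, sigma * np.sqrt(dt)).sum(axis=1))
    return np.mean(loglr >= np.log(k))

print(prob_misleading(mu0=0.0, mu1=0.5, sigma=1.0, T=1.0, n=250))
```

Re-running with a longer horizon T or finer sampling n shows how the sampling characteristics drive the misleading-evidence probability, which is the effect the simulation study above examines.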
28.
In this article, we consider a nonparametric regression model with replicated observations and a dependent error structure, allowing dependence among the units. Wavelet procedures are developed to estimate the regression function. The moment consistency, strong consistency, strong convergence rate and asymptotic normality of the wavelet estimator are established under suitable conditions. A simulation study is undertaken to assess the finite-sample performance of the proposed method.
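A sketch of a generic wavelet regression estimate using PyWavelets: average the replicates, decompose, soft-threshold the detail coefficients, and reconstruct. The universal threshold and db4 wavelet are standard defaults assumed for illustration, not the paper's specific procedure.

```python
import numpy as np
import pywt

def wavelet_regression(y, wavelet="db4", level=4):
    """Denoise equally spaced responses by soft-thresholding detail
    coefficients with the universal threshold, then reconstructing."""
    coeffs = pywt.wavedec(y, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise scale from finest level
    thr = sigma * np.sqrt(2 * np.log(len(y)))          # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(y)]

# Replicated observations: average the replicates per design point first.
gen = np.random.default_rng(2)
t = np.linspace(0, 1, 256)
reps = np.sin(2 * np.pi * t) + gen.normal(0, 0.3, (5, 256))   # 5 replicates
fhat = wavelet_regression(reps.mean(axis=0))
```

With dependent errors, as in the paper, the threshold would need to account for the error correlation; the iid-noise threshold here is only a starting point.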
29.
In this paper, we investigate the effect of a cold standby component on the mean residual life (MRL) of a system. When the system fails, a cold standby component is immediately put in operation. We focus in particular on coherent systems in which, once the standby component has been put into operation, the system fails at the next component failure. For these systems, we define MRL functions and obtain their explicit expressions. Some stochastic ordering results are also provided. Such systems include k-out-of-n systems, so our results extend some results in the literature.
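A Monte Carlo sketch of this MRL for the k-out-of-n special case, under the model described above: the base system fails at the (n-k+1)-th component failure, the fresh standby is then switched on, and the extended system fails at the next failure among the k-1 survivors and the standby. Component lifetimes are iid here, an assumption of the sketch rather than a claim about the paper's generality.

```python
import numpy as np

def mrl_standby_kn(n, k, t, draw, reps=100_000, seed=0):
    """Estimate the mean residual life at time t of a k-out-of-n system
    backed by one cold standby activated at system failure."""
    rng = np.random.default_rng(seed)
    x = np.sort(draw(rng, (reps, n)), axis=1)
    fail = x[:, n - k]                              # (n-k+1)-th failure ends base system
    survivors = x[:, n - k + 1:] - fail[:, None]    # residual lives of k-1 survivors
    standby = draw(rng, (reps, 1))                  # fresh standby lifetime
    total = fail + np.min(np.hstack([survivors, standby]), axis=1)
    alive = total > t
    return np.mean(total[alive] - t)                # empirical MRL at t

exp_draw = lambda rng, size: rng.exponential(1.0, size)
print(mrl_standby_kn(n=5, k=3, t=0.5, draw=exp_draw))
```

Comparing this estimate with the MRL of the same system without the standby quantifies the gain the paper studies analytically.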
30.
The purpose of acceptance sampling is to develop decision rules to accept or reject production lots based on sample data. When testing is destructive or expensive, dependent sampling procedures accumulate results from several preceding lots. This chaining of past lot results reduces the required sample sizes. Most of these procedures, however, only chain past lot results when defects are found in the current sample, and such selective use of past lot results achieves only a limited reduction in sample sizes. In this article, a modified approach for chaining past lot results is proposed that is less selective in its use of quality history and, as a result, requires a smaller sample size than commonly used dependent sampling procedures, such as multiple dependent sampling plans and the chain sampling plans of Dodge. The proposed plans are applicable to inspection by attributes and inspection by variables. Several properties of their operating characteristic (OC) curves are derived, and search procedures are given to select such modified chain sampling plans using the two-point method.
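For reference, the OC curve of the classical baseline being improved upon, Dodge's ChSP-1 chain sampling plan: accept on zero defects in the current sample of size n, or on exactly one defect provided the preceding i samples were defect-free. This sketch shows the baseline plan only, not the article's modified plans.

```python
from scipy.stats import binom

def chsp1_oc(p, n, i):
    """Probability of acceptance under ChSP-1 at lot fraction defective p:
    Pa(p) = P0 + P1 * P0**i, with the defect count ~ Binomial(n, p)."""
    p0 = binom.pmf(0, n, p)
    p1 = binom.pmf(1, n, p)
    return p0 + p1 * p0 ** i

for p in (0.01, 0.02, 0.05):
    print(p, chsp1_oc(p, n=20, i=3))
```

The two-point method mentioned above then searches for plan parameters (n, i) whose OC curve passes through prescribed producer-risk and consumer-risk points.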