Results: 3,078 pay-per-view full-text articles; 72 free; 6 free (domestic).
By subject: Management 73; Demography 11; Collected series 24; Theory and methodology 15; General 273; Sociology 17; Statistics 2,743.
By year: 2023: 11; 2022: 8; 2021: 23; 2020: 54; 2019: 95; 2018: 130; 2017: 199; 2016: 84; 2015: 75; 2014: 102; 2013: 1,125; 2012: 245; 2011: 76; 2010: 79; 2009: 83; 2008: 74; 2007: 66; 2006: 54; 2005: 79; 2004: 57; 2003: 46; 2002: 53; 2001: 51; 2000: 34; 1999: 42; 1998: 45; 1997: 26; 1996: 12; 1995: 15; 1994: 6; 1993: 9; 1992: 7; 1991: 5; 1990: 10; 1989: 7; 1988: 10; 1987: 6; 1986: 3; 1985: 10; 1984: 8; 1983: 11; 1982: 5; 1981: 1; 1980: 4; 1979: 2; 1978: 2; 1977: 3; 1976: 1; 1975: 2; 1973: 1.
A total of 3,156 results (search time: 46 ms).
41.
Outlier detection algorithms are intimately connected with robust statistics that down-weight some observations to zero. We define a number of outlier detection algorithms related to the Huber-skip and least trimmed squares estimators, including the one-step Huber-skip estimator and the forward search. Next, we review a recently developed asymptotic theory of these estimators. Finally, we analyse the gauge, the fraction of wrongly detected outliers, for a number of outlier detection algorithms and establish asymptotic normal and Poisson theories for the gauge.
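As a rough illustration of the kind of procedure and the gauge discussed above, the sketch below runs a one-step Huber-skip-type rule (initial least squares fit, flag large standardized residuals, refit) and measures the fraction of wrongly flagged observations on clean simulated data. This is a minimal sketch under assumed Gaussian errors and a hypothetical cutoff; it omits the consistency corrections of the paper's estimators and is not the authors' implementation.

```python
import numpy as np
from scipy import stats

def huber_skip_one_step(X, y, cut=2.576):
    """One-step Huber-skip-type outlier detection (illustrative sketch only).

    Fit ordinary least squares, flag observations whose standardized
    residuals exceed `cut`, then refit on the retained observations.
    The consistency corrections used in the paper's estimators are omitted.
    """
    beta0, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta0
    sigma0 = np.std(resid, ddof=X.shape[1])          # crude residual scale estimate
    outlier = np.abs(resid) > cut * sigma0           # observations to "skip"
    beta1, *_ = np.linalg.lstsq(X[~outlier], y[~outlier], rcond=None)
    return beta1, outlier

# Empirical gauge: average fraction of observations wrongly flagged when the
# data actually contain no outliers.
rng = np.random.default_rng(0)
n, reps, cut = 200, 500, 2.576
gauges = []
for _ in range(reps):
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)
    _, flagged = huber_skip_one_step(X, y, cut)
    gauges.append(flagged.mean())
print("empirical gauge:", np.mean(gauges))
print("2*(1 - Phi(cut)):", 2 * (1 - stats.norm.cdf(cut)))   # rough benchmark
```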
42.
Simulations of forest inventory in several populations compared simple random sampling with "quick probability proportional to size" (QPPS) sampling. The latter may be applied in the absence of a list sampling frame and/or prior measurement of the auxiliary variable. The correlation between the auxiliary and target variables required to render QPPS sampling more efficient than simple random sampling varied over the range 0.3–0.6 and was lower when sampling from populations that were skewed to the right. Two possible analytical estimators of the standard error of the estimated mean under QPPS sampling were found to be less reliable than bootstrapping.
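The field "quick PPS" procedure itself is not specified in the abstract; as a stand-in, the sketch below compares simple random sampling with textbook PPS-with-replacement (Hansen-Hurwitz) estimation of a population mean, plus a bootstrap standard error. The simulated population, correlation level, and sample sizes are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: auxiliary x (e.g., basal area) and target y (e.g., volume),
# right-skewed and strongly correlated.
N = 5000
x = rng.lognormal(mean=0.0, sigma=0.8, size=N)
y = 2.0 * x + rng.normal(scale=1.0, size=N)
print("corr(x, y):", np.corrcoef(x, y)[0, 1])

n = 50
# Simple random sampling without replacement
srs = rng.choice(N, size=n, replace=False)
mean_srs = y[srs].mean()

# PPS with replacement (Hansen-Hurwitz): selection probability proportional to x
p = x / x.sum()
pps = rng.choice(N, size=n, replace=True, p=p)
mean_pps = (y[pps] / p[pps]).mean() / N              # estimated total divided by N

# Bootstrap SE for the PPS estimator: resample the n selected units with replacement
B = 2000
boot = np.empty(B)
for b in range(B):
    idx = rng.choice(pps, size=n, replace=True)
    boot[b] = (y[idx] / p[idx]).mean() / N

print("true mean:", y.mean())
print("SRS mean :", mean_srs)
print("PPS mean :", mean_pps, " bootstrap SE:", boot.std(ddof=1))
```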
43.
This work studies the smoothness attained by the methods most frequently used to choose the smoothing parameter in the context of splines: Cross Validation, Generalized Cross Validation, and the corrected Akaike and Bayesian Information Criteria, implemented with Penalized Least Squares. It is concluded that the amount of smoothness depends strongly on the length of the series and on the type of underlying trend, while the presence of seasonality, even though statistically significant, is less relevant. The intrinsic variability of the series is not statistically significant, and its effect is taken into account only through the smoothing parameter.
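To make the selection step concrete, here is a minimal sketch of choosing the smoothing parameter by Generalized Cross Validation for a penalized least squares smoother with a second-difference roughness penalty (a Whittaker-type discrete smoother rather than the paper's spline setup; the trend, noise level, and lambda grid are hypothetical).

```python
import numpy as np

def pls_smoother_matrix(n, lam):
    """Hat matrix of the penalized least squares smoother
    min ||y - f||^2 + lam * ||D2 f||^2 with a second-difference penalty."""
    D = np.diff(np.eye(n), n=2, axis=0)              # (n-2) x n second-difference matrix
    return np.linalg.inv(np.eye(n) + lam * D.T @ D)

def gcv_score(y, lam):
    """GCV(lambda) = n * RSS / (n - tr(H))^2 for the hat matrix H."""
    n = len(y)
    H = pls_smoother_matrix(n, lam)
    fhat = H @ y
    rss = np.sum((y - fhat) ** 2)
    return n * rss / (n - np.trace(H)) ** 2

rng = np.random.default_rng(2)
n = 120
t = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * t) + 0.3 * rng.normal(size=n)     # trend plus noise

lams = 10.0 ** np.arange(-2, 6)
scores = [gcv_score(y, lam) for lam in lams]
print("GCV-selected lambda:", lams[int(np.argmin(scores))])
```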
44.
The marginal likelihood can be notoriously difficult to compute, particularly in high-dimensional problems. Chib and Jeliazkov employed the local reversibility of the Metropolis–Hastings algorithm to construct an estimator for models in which full conditional densities are not available analytically. The estimator is free of distributional assumptions and is directly linked to the simulation algorithm. However, it generally requires a sequence of reduced Markov chain Monte Carlo runs, which makes the method computationally demanding, especially when the parameter space is large. In this article, we study the implementation of this estimator for latent variable models in which the responses are independent given the latent variables (conditional or local independence). This property is exploited in the construction of a multi-block Metropolis-within-Gibbs algorithm that allows the estimator to be computed in a single run, regardless of the dimensionality of the parameter space. The counterpart one-block algorithm is also considered, and the difference between the two approaches is pointed out. The paper closes with illustrations of the estimator on simulated and real data sets.
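For orientation, the sketch below implements the basic one-block Chib-Jeliazkov identity (log marginal likelihood = log likelihood + log prior - log posterior ordinate, with the ordinate estimated from the Metropolis-Hastings output) for a toy normal-mean model where the answer is available in closed form. The multi-block Metropolis-within-Gibbs construction and the latent-variable models of the article are not reproduced; model, prior, and tuning constants are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Toy model: y_i ~ N(theta, sigma^2), sigma known; prior theta ~ N(mu0, tau0^2).
sigma, mu0, tau0 = 1.0, 0.0, 2.0
y = rng.normal(loc=1.0, scale=sigma, size=30)

def log_lik(theta):
    return stats.norm.logpdf(y, loc=theta, scale=sigma).sum()

def log_prior(theta):
    return stats.norm.logpdf(theta, loc=mu0, scale=tau0)

def log_post(theta):
    return log_lik(theta) + log_prior(theta)

def accept_prob(a, b):
    """MH acceptance probability for the symmetric random-walk proposal."""
    return min(1.0, np.exp(min(0.0, log_post(b) - log_post(a))))

# One-block random-walk Metropolis-Hastings run
G, s = 5000, 0.5
theta, draws = y.mean(), np.empty(G)
for g in range(G):
    prop = theta + s * rng.normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    draws[g] = theta
draws = draws[G // 4:]                                   # drop burn-in

# Chib-Jeliazkov estimate of the posterior ordinate at a high-density point
theta_star = draws.mean()
num = np.mean([accept_prob(t, theta_star) * stats.norm.pdf(theta_star, loc=t, scale=s)
               for t in draws])
den = np.mean([accept_prob(theta_star, t)
               for t in theta_star + s * rng.normal(size=draws.size)])
log_ml = log_lik(theta_star) + log_prior(theta_star) - np.log(num / den)

# Closed-form check: marginally, y ~ N(mu0 * 1, sigma^2 I + tau0^2 * 1 1')
n = y.size
cov = sigma ** 2 * np.eye(n) + tau0 ** 2 * np.ones((n, n))
print("Chib-Jeliazkov log marginal likelihood:", log_ml)
print("exact log marginal likelihood         :",
      stats.multivariate_normal.logpdf(y, mean=np.full(n, mu0), cov=cov))
```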
45.
The primary objective of a multi-regional clinical trial is to investigate the overall efficacy of the drug across regions and to evaluate the possibility of applying the overall trial result to a specific region. A challenge arises when the regional sample size is not large enough. We focus on the problem of evaluating the applicability of a drug to a specific region of interest under the criterion of preserving a certain proportion of the overall treatment effect in that region. We propose a variant of the James-Stein shrinkage estimator, in an empirical Bayes framework, for the region-specific treatment effect. The estimator accommodates the between-region variation and includes a finite-sample bias correction. We also propose a truncated version of the shrinkage estimator to further protect against the risk posed by extreme values of the regional treatment effect. Based on the proposed estimator, we provide a consistency assessment criterion and a sample size calculation for the region of interest. Simulations are conducted to demonstrate the performance of the proposed estimators in comparison with some existing methods. A hypothetical example is presented to illustrate the application of the proposed method.
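A minimal sketch of the general idea (not the paper's exact variant or its finite-sample bias correction): shrink each regional estimate toward the overall effect with empirical Bayes weights based on the regional variances and an estimated between-region variance, and optionally truncate. The DerSimonian-Laird estimate of tau^2, the truncation bounds, and the example numbers are all illustrative assumptions.

```python
import numpy as np

def shrink_regional_effects(d, v, lower=None, upper=None):
    """Empirical-Bayes (James-Stein-type) shrinkage of regional treatment
    effects d_i with variances v_i toward the overall fixed-effect estimate.

    Returns (shrunken estimates, overall estimate, tau^2 estimate); the
    shrunken estimates are optionally truncated to [lower, upper].
    """
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1.0 / v
    d_bar = np.sum(w * d) / np.sum(w)                    # overall fixed-effect estimate
    k = len(d)
    Q = np.sum(w * (d - d_bar) ** 2)                     # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)                   # DerSimonian-Laird between-region variance
    B = v / (v + tau2)                                   # shrinkage factors
    theta = (1 - B) * d + B * d_bar                      # shrink toward the overall effect
    if lower is not None or upper is not None:
        theta = np.clip(theta, lower, upper)             # truncated version
    return theta, d_bar, tau2

# Hypothetical regional effect estimates (log scale) and their variances
d = [0.35, 0.10, 0.52, -0.05, 0.28]
v = [0.04, 0.06, 0.09, 0.05, 0.03]
theta, overall, tau2 = shrink_regional_effects(d, v, lower=0.0)
print("overall effect:", overall, " tau^2:", tau2)
print("shrunken regional effects:", theta)
```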
46.
We apply the Abramson principle to define adaptive kernel estimators for the intensity function of a spatial point process. We derive asymptotic expansions for the bias and variance under the regime in which n independent copies of a simple point process in Euclidean space are superposed. The method is illustrated by means of a simple example and applied to tornado data.
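A minimal two-dimensional sketch of the Abramson square-root law for a point pattern: compute a fixed-bandwidth pilot intensity, set each event's bandwidth inversely proportional to the square root of the pilot at that event, and sum the kernels. The pilot bandwidth, window, and uniform toy pattern are assumptions; edge corrections from the paper are not reproduced.

```python
import numpy as np

def gaussian_kernel_2d(d2, h):
    """Isotropic 2-D Gaussian kernel evaluated at squared distances d2."""
    return np.exp(-d2 / (2.0 * h ** 2)) / (2.0 * np.pi * h ** 2)

def adaptive_intensity(points, grid, h0):
    """Abramson-type adaptive kernel intensity estimate for a planar point pattern.

    points: (n, 2) event locations; grid: (m, 2) evaluation locations.
    Bandwidth at event i is h0 * (pilot(x_i) / g)^(-1/2), where g is the
    geometric mean of the pilot intensity over the events.
    """
    pts = np.asarray(points, float)
    d2_pp = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
    pilot = gaussian_kernel_2d(d2_pp, h0).sum(axis=1)    # fixed-bandwidth pilot at the events
    g = np.exp(np.mean(np.log(pilot)))
    h_i = h0 * np.sqrt(g / pilot)                        # per-event adaptive bandwidths
    d2_gp = np.sum((np.asarray(grid, float)[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
    return gaussian_kernel_2d(d2_gp, h_i).sum(axis=1)    # one kernel per event

rng = np.random.default_rng(4)
points = rng.uniform(0, 1, size=(200, 2))                # toy pattern on the unit square
gx, gy = np.meshgrid(np.linspace(0, 1, 25), np.linspace(0, 1, 25))
grid = np.column_stack([gx.ravel(), gy.ravel()])
lam_hat = adaptive_intensity(points, grid, h0=0.1)
print("mean estimated intensity:", lam_hat.mean())       # roughly 200, up to edge effects
```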
47.
In this article, the least squares (LS) estimates of the parameters of periodic autoregressive (PAR) models are investigated for various distributions of the error terms via Monte Carlo simulation. Besides the Gaussian distribution, the study covers the exponential, gamma, Student's t, and Cauchy distributions. The estimates are compared across distributions using the bias and MSE criteria. The effects of other factors are also examined: non-constant model orders, non-constant variances of the seasonal white noise, the period length, and the length of the time series. The simulation results indicate that the method is, in general, robust for the estimation of the AR parameters with respect to the distribution of the error terms and the other factors. However, the estimates of those parameters were in some cases noticeably poor for the Cauchy distribution. It is also noted that the variances of the estimated white noise variances are strongly affected by the degree of skewness of the error distribution.
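A minimal sketch of the estimation step for a zero-mean PAR(1) model: simulate a series whose AR coefficient changes with the season and estimate each seasonal coefficient by least squares within its season, then summarize bias and MSE over Monte Carlo replications. The period, coefficients, error distributions, and replication counts are illustrative assumptions, not the paper's design.

```python
import numpy as np

def simulate_par1(phi, n_years, rng, error="normal"):
    """Simulate a zero-mean PAR(1) series X_t = phi_{s(t)} X_{t-1} + eps_t, period len(phi)."""
    S = len(phi)
    n = S * n_years
    if error == "normal":
        eps = rng.standard_normal(n)
    elif error == "t3":
        eps = rng.standard_t(3, size=n)
    elif error == "cauchy":
        eps = rng.standard_cauchy(n)
    else:
        raise ValueError("unknown error distribution")
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi[t % S] * x[t - 1] + eps[t]
    return x

def fit_par1_ls(x, S):
    """Season-by-season least squares: regress X_t on X_{t-1} within each season."""
    phi_hat = np.empty(S)
    t = np.arange(1, len(x))
    for s in range(S):
        mask = (t % S) == s
        xt, xlag = x[t[mask]], x[t[mask] - 1]
        phi_hat[s] = np.sum(xlag * xt) / np.sum(xlag ** 2)
    return phi_hat

rng = np.random.default_rng(5)
phi_true = np.array([0.5, -0.3, 0.8, 0.2])               # period 4, e.g. quarterly data
reps = 500
est = np.array([fit_par1_ls(simulate_par1(phi_true, 50, rng, "t3"), 4) for _ in range(reps)])
print("bias:", est.mean(axis=0) - phi_true)
print("MSE :", ((est - phi_true) ** 2).mean(axis=0))
```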
48.
The two parametric distribution functions appearing in extreme-value theory, the generalized extreme-value distribution and the generalized Pareto distribution, have log-concave densities if the extreme-value index γ ∈ [−1, 0]. Replacing the order statistics in tail-index estimators by the corresponding quantiles of the distribution function based on the estimated log-concave density f̂n leads to novel smooth quantile and tail-index estimators. These new estimators aim at estimating the tail index, especially in small samples. Acting as a smoother of the empirical distribution function, the log-concave distribution function estimator reduces estimation variability to a much greater extent than it introduces bias. As a consequence, Monte Carlo simulations demonstrate that the smoothed versions of the estimators are clearly superior to their non-smoothed counterparts in terms of mean squared error.
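The log-concave maximum likelihood estimate itself requires a dedicated solver and is not sketched here. As context only, the sketch below shows the classical Pickands tail-index estimator computed from upper order statistics, which is the kind of quantity whose smoothed counterpart (order statistics replaced by quantiles of the fitted log-concave distribution) the abstract studies. The sample, the choice of k, and the uniform example (true γ = −1) are illustrative assumptions.

```python
import numpy as np

def pickands_tail_index(x, k):
    """Classical Pickands estimator of the extreme-value index gamma.

    Uses the upper order statistics X_{(n-k+1)}, X_{(n-2k+1)}, X_{(n-4k+1)}.
    The smoothed estimators discussed above would instead use quantiles of a
    fitted log-concave distribution function (not implemented here).
    """
    xs = np.sort(x)
    n = len(xs)
    if 4 * k > n:
        raise ValueError("need 4k <= n")
    a = xs[n - k] - xs[n - 2 * k]
    b = xs[n - 2 * k] - xs[n - 4 * k]
    return np.log(a / b) / np.log(2.0)

rng = np.random.default_rng(6)
x = rng.uniform(size=2000)            # Uniform(0, 1): short-tailed, true gamma = -1
print("Pickands estimate:", pickands_tail_index(x, k=50), "(true gamma = -1)")
```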
49.
Likelihood ratios (LRs) are used to characterize the efficiency of diagnostic tests. In this paper, we use the classical weighted least squares (CWLS) test procedure, originally proposed for testing the homogeneity of relative risks, to compare the LRs of two or more binary diagnostic tests. We compare the performance of this method with the relative diagnostic likelihood ratio (rDLR) method and the diagnostic likelihood ratio regression (DLRReg) approach in terms of size and power, and we observe that CWLS and rDLR perform identically when comparing two diagnostic tests, while the DLRReg method has higher type I error rates and higher power. We also examine the performance of the CWLS and DLRReg methods for comparing three diagnostic tests under various combinations of sample size and prevalence. On the basis of Monte Carlo simulations, we conclude that all of the tests are generally conservative and have low power, especially with small sample sizes and low prevalence.
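As a rough sketch of the underlying idea, the code below computes log positive likelihood ratios with delta-method variances from 2x2 tables and applies a generic weighted least squares (Cochran-Q-type) homogeneity test. It assumes independent samples per test and is only an analogue of the CWLS procedure, not the paper's exact method; the counts are hypothetical.

```python
import numpy as np
from scipy import stats

def log_lr_plus(tp, fn, fp, tn):
    """Positive likelihood ratio LR+ = Se / (1 - Sp) on the log scale,
    with a delta-method variance (independent-sample approximation)."""
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    log_lr = np.log(se / (1.0 - sp))
    var = (1 - se) / ((tp + fn) * se) + sp / ((tn + fp) * (1 - sp))
    return log_lr, var

def wls_homogeneity_test(tables):
    """Weighted least squares test that several diagnostic tests share the
    same LR+ (a generic analogue of the CWLS idea)."""
    logs, vars_ = zip(*(log_lr_plus(*t) for t in tables))
    logs, w = np.array(logs), 1.0 / np.array(vars_)
    pooled = np.sum(w * logs) / np.sum(w)
    Q = np.sum(w * (logs - pooled) ** 2)
    df = len(tables) - 1
    return Q, 1.0 - stats.chi2.cdf(Q, df)

# Hypothetical (tp, fn, fp, tn) tables for three diagnostic tests
tables = [(90, 10, 20, 80), (85, 15, 15, 85), (80, 20, 30, 70)]
Q, pval = wls_homogeneity_test(tables)
print("Q =", Q, " p-value =", pval)
```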
50.
Peto and Peto (1972) studied rank-invariant tests for comparing two survival curves with right-censored data. We apply their tests, including the logrank test and the generalized Wilcoxon test, to left-truncated and interval-censored data. The significance levels of the tests are approximated by Monte Carlo permutation tests. Simulation studies are conducted to examine their size and power under different distributional differences. In particular, the logrank test performs well under Cox proportional hazards alternatives, as it does for the usual right-censored data. The methods are illustrated by an analysis of the Massachusetts Health Care Panel Study dataset.
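A minimal sketch of the permutation approach for the logrank score with left-truncated, right-censored data (interval censoring, handled in the paper, is not covered here): the risk set at each event time contains subjects with entry time before it and observation time at or beyond it, and the permutation p-value is obtained by shuffling group labels. Variable names and the toy data-generating mechanism are hypothetical.

```python
import numpy as np

def logrank_score(entry, time, event, group):
    """Logrank score U = sum_j (O_1j - E_1j) over distinct event times,
    with risk sets adjusted for left truncation (entry < t <= time)."""
    U = 0.0
    for t in np.unique(time[event == 1]):
        at_risk = (entry < t) & (time >= t)
        dying = (time == t) & (event == 1)
        n_t, d_t = at_risk.sum(), dying.sum()
        n1_t, d1_t = (at_risk & (group == 1)).sum(), (dying & (group == 1)).sum()
        U += d1_t - d_t * n1_t / n_t                 # observed minus expected in group 1
    return U

def permutation_logrank(entry, time, event, group, n_perm=2000, seed=0):
    """Monte Carlo permutation p-value: permute group labels among subjects."""
    rng = np.random.default_rng(seed)
    obs = logrank_score(entry, time, event, group)
    perm = np.array([logrank_score(entry, time, event, rng.permutation(group))
                     for _ in range(n_perm)])
    return obs, np.mean(np.abs(perm) >= abs(obs))

# Toy left-truncated, right-censored data (hypothetical)
rng = np.random.default_rng(7)
n = 80
group = np.repeat([0, 1], n // 2)
entry = rng.uniform(0.0, 0.5, size=n)                # delayed (left-truncated) entry
t_event = entry + rng.exponential(scale=np.where(group == 1, 2.0, 1.0), size=n)
t_cens = entry + rng.uniform(0.5, 3.0, size=n)
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(int)
obs, pval = permutation_logrank(entry, time, event, group)
print("logrank score:", obs, " permutation p-value:", pval)
```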