261.
This paper focuses on bivariate kernel density estimation, which bridges the gap between univariate and multivariate applications. We propose a subsampling-extrapolation bandwidth matrix selector that improves the reliability of the conventional cross-validation method. The proposed procedure combines a U-statistic expression of the mean integrated squared error with asymptotic theory, and can be used with both diagonal and unconstrained bandwidth matrices. In the subsampling stage, one takes advantage of the reduced variability of estimating the bandwidth matrix at a smaller subsample size m (m < n); in the extrapolation stage, a simple linear extrapolation removes the incurred bias. Simulation studies reveal that the proposed method reduces the variability of the cross-validation method by about 50% and achieves an expected integrated squared error that is up to 30% smaller than that of the benchmark cross-validation. In terms of the expected integrated squared error, it shows comparable or improved performance relative to other competitors across six distributions. We prove that the components of the selected bivariate bandwidth matrix have an asymptotic multivariate normal distribution, and we also present the relative rate of convergence of the proposed bandwidth selector.
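A minimal two-stage sketch of the subsample-then-extrapolate idea, under assumptions not taken from the paper: a diagonal bandwidth matrix, the least-squares cross-validation selector from statsmodels, and the asymptotic rate h ∝ n^{-1/(d+4)} with d = 2 standing in for the authors' linear extrapolation; data, sizes, and names are illustrative.

```python
import numpy as np
from statsmodels.nonparametric.kernel_density import KDEMultivariate

rng = np.random.default_rng(0)
n, m = 2000, 400                       # full sample size and subsample size (m < n)
data = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n)

# Stage 1 (subsampling): cross-validated diagonal bandwidths on a cheaper subsample.
sub = data[rng.choice(n, size=m, replace=False)]
h_m = KDEMultivariate(sub, var_type='cc', bw='cv_ls').bw

# Stage 2 (extrapolation): rescale to the full sample via h_opt ~ n^{-1/(d+4)}, d = 2.
h_n = h_m * (m / n) ** (1.0 / 6.0)
print("subsample bandwidths:", h_m, " extrapolated to n:", h_n)
```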
262.
This paper proposes the use of the Bernstein–Dirichlet process prior for a new nonparametric approach to estimating the link function in the single-index model (SIM). The Bernstein–Dirichlet process prior has so far mainly been used for nonparametric density estimation. Here we modify this approach to allow for an approximation of the unknown link function. Instead of the usual Gaussian distribution, the error term is assumed to follow an asymmetric Laplace distribution, which increases the flexibility and robustness of the SIM. To automatically identify truly active predictors, spike-and-slab priors are used for Bayesian variable selection. Posterior computations are performed via a Metropolis–Hastings-within-Gibbs sampler using a truncation-based algorithm for stick-breaking priors. We compare the efficiency of the proposed approach with well-established techniques in an extensive simulation study and illustrate its practical performance with an application to nonparametric modelling of the power consumption in a sewage treatment plant.
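For intuition only, a plug-in (non-Bayesian) Bernstein-polynomial approximation of a link function on [0, 1]; the coefficients g(k/K) below are fixed values, standing in for the quantities on which the Bernstein–Dirichlet prior would actually place a posterior distribution.

```python
import numpy as np
from scipy.stats import binom

def bernstein_approx(g, K, x):
    """Degree-K Bernstein approximation: sum_k g(k/K) * C(K,k) x^k (1-x)^(K-k)."""
    k = np.arange(K + 1)
    coeffs = g(k / K)                                  # plug-in coefficients
    return np.array([np.sum(coeffs * binom.pmf(k, K, xi)) for xi in np.atleast_1d(x)])

x = np.linspace(0.0, 1.0, 5)
print(bernstein_approx(np.sin, 20, x))                 # smooth approximation of sin
print(np.sin(x))                                       # target link for comparison
```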
263.
In this paper, the kernel density estimator for negatively superadditive dependent (NSD) random variables is studied. Exponential inequalities and a uniform exponential convergence rate for the kernel density estimator over compact sets are established. The optimal bandwidth rate of the estimator is also obtained using the mean integrated squared error. The results generalize and improve those previously obtained for associated sequences. As an application, FGM sequences that fulfil our assumptions are investigated. The convergence rate of the kernel density estimator is also illustrated via a simulation study, and a real data analysis is presented.
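A quick illustration of the interplay between bandwidth and uniform error on a compact set, using i.i.d. standard normal data rather than the paper's NSD setting; the n^{-1/5} bandwidth factor is the classical MISE-optimal rate for second-order kernels, used here purely as an assumption.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(1)
grid = np.linspace(-2.0, 2.0, 201)          # compact set on which the sup-error is taken
for n in (200, 2000, 20000):
    x = rng.standard_normal(n)
    kde = gaussian_kde(x, bw_method=n ** (-1 / 5))   # bandwidth factor at the MISE rate
    sup_err = np.max(np.abs(kde(grid) - norm.pdf(grid)))
    print(f"n = {n:6d}   sup-error on [-2, 2]: {sup_err:.4f}")
```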
264.
Traditional classification is based on the assumption that the distribution of the indicator variable X within a class is homogeneous. However, when the data in one class come from a heterogeneous distribution, the likelihood ratio between the two classes is not unique. In this paper, we construct a classifier via an ambiguity criterion for the case where the distribution of X is heterogeneous within a single class. The separate historical data from each situation are used to estimate situation-specific thresholds. The final boundary is then formed by the maximum and minimum of these thresholds across all situations. Our approach attains minimum ambiguity with high classification accuracy, allowing a precise decision. In addition, nonparametric estimation of the classification region and its theoretical properties are derived. A simulation study and a real data analysis are reported to demonstrate the effectiveness of our method.
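A toy sketch of one possible reading of the max/min-threshold construction (not the authors' ambiguity criterion): each "situation" yields its own cutoff on X, and observations falling between the smallest and largest situation-specific cutoffs are flagged as ambiguous; the quartile-midpoint cutoff and all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def situation_threshold(x0, x1):
    # simple cutoff: midpoint between the class-0 upper and class-1 lower quartiles
    return 0.5 * (np.quantile(x0, 0.75) + np.quantile(x1, 0.25))

thresholds = []
for shift in (0.0, 0.3, 0.6):                 # heterogeneous situations within class 1
    x0 = rng.normal(0.0, 1.0, 300)
    x1 = rng.normal(2.0 + shift, 1.0, 300)
    thresholds.append(situation_threshold(x0, x1))

lo, hi = min(thresholds), max(thresholds)     # final boundary: min and max thresholds

def classify(x):
    return 0 if x < lo else (1 if x > hi else "ambiguous")

print(lo, hi, classify(-0.5), classify(1.2), classify(3.0))
```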
265.
In survival analysis, one way to deal with non-proportional hazards is to model short-term and long-term hazard ratios. The existing model of this kind offers no control over how fast the hazard ratio changes over time. We add a parameter to the existing model to allow the hazard ratio to change over time at different speeds. A nonparametric maximum likelihood approach is used to estimate the model parameters. The existing model is a special case of the extended model when the speed parameter is 0, which leads naturally to a way of testing the adequacy of the existing model. Simulation results show that there can be substantial bias in the estimation of the short-term and long-term hazard ratios if the speed parameter is incorrectly fixed at 0 rather than estimated. The extended model is fitted to three real data sets to provide new insights, including the observation that converging hazards do not necessarily imply proportional odds.
266.
In this paper, we consider inference on the stress-strength parameter, R, based on two independent Type-II censored samples from exponentiated Fréchet populations with different index parameters. The maximum likelihood and uniformly minimum variance unbiased estimators, exact and asymptotic confidence intervals, and hypothesis tests for R are obtained. We conduct a Monte Carlo simulation study to evaluate the performance of these estimators and confidence intervals. Finally, two real data sets are analysed for illustrative purposes.
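A hedged sanity-check sketch, not the paper's censored-data estimators: it simulates two exponentiated Fréchet samples (assuming the Nadarajah–Kotz form F(x) = 1 − {1 − exp[−(σ/x)^λ]}^index with a common baseline) and compares the empirical P(Y < X) with the closed form β/(α + β) obtained from a standard calculation for exponentiated families sharing a baseline.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, lam = 1.0, 2.0
alpha, beta = 3.0, 1.5          # index parameters of X (strength) and Y (stress)

def rexp_frechet(size, index):
    # inverse-CDF sampling from F(x) = 1 - {1 - exp[-(sigma/x)^lam]}^index
    u = rng.uniform(size=size)
    return sigma * (-np.log(1.0 - (1.0 - u) ** (1.0 / index))) ** (-1.0 / lam)

x = rexp_frechet(200_000, alpha)
y = rexp_frechet(200_000, beta)
print("Monte Carlo estimate of R = P(Y < X):", np.mean(y < x))
print("closed form beta/(alpha+beta):       ", beta / (alpha + beta))
```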
267.
Benjamin Laumen, Statistics, 2019, 53(3): 569–600
In this paper, we revisit the progressive Type-I censoring scheme as originally introduced by Cohen [Progressively censored samples in life testing. Technometrics. 1963;5(3):327–339]. In its original form, progressive Type-I censoring proceeds like progressive Type-II censoring, but with fixed censoring times instead of censoring times determined by the failure times. Apparently, a time truncation was later added to this censoring scheme by interpreting the final censoring time as a termination time, and as a result not much work has been done on Cohen's original progressive censoring scheme with fixed censoring times. We therefore discuss distributional results for this scheme and establish exact distributional results in likelihood inference for exponentially distributed lifetimes. In particular, we obtain the exact distribution of the maximum likelihood estimator (MLE). Further, the stochastic monotonicity of the MLE is verified in order to construct exact confidence intervals for both the scale parameter and the reliability.
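A minimal simulation sketch of the scheme under stated assumptions (not taken from the paper): exponential lifetimes, r_j surviving units withdrawn at fixed times t_j, the last t_j terminating the test, and the familiar fact that the exponential-mean MLE is then the total time on test divided by the number of observed failures.

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n = 2.0, 50
t_cens = [1.0, 2.0, 3.0]    # fixed censoring times; 3.0 is the termination time
r_plan = [5, 5, None]       # withdrawals at each time; None = remove all survivors

life = list(rng.exponential(theta, size=n))   # iid lifetimes, random order
ttt, failures = 0.0, 0
for t_j, r_j in zip(t_cens, r_plan):
    failed = [x for x in life if x <= t_j]    # failures since the previous time
    life = [x for x in life if x > t_j]       # units still on test at t_j
    ttt += sum(failed)
    failures += len(failed)
    k = len(life) if r_j is None else min(r_j, len(life))
    ttt += k * t_j                            # each withdrawn unit contributes t_j
    life = life[k:]                           # lifetimes are exchangeable, so drop first k

print("observed failures:", failures, "  MLE of theta:", ttt / max(failures, 1))
```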
268.
In this paper, we consider the estimation, from a Bayesian perspective, of the three determining parameters of the efficient frontier: the expected return and the variance of the global minimum variance portfolio, and the slope parameter. Their posterior distribution is derived by assigning the diffuse and the conjugate priors to the mean vector and the covariance matrix of the asset returns, and is presented in terms of a stochastic representation. Furthermore, Bayesian estimates together with standard uncertainties for all three parameters are provided, and their asymptotic distributions are established. All findings are applied to real data consisting of the returns on assets included in the S&P 500. The empirical properties of the efficient frontier are then examined in detail.
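For orientation, a frequentist plug-in sketch of the three frontier parameters named above (GMV expected return, GMV variance, slope), using standard sample estimates rather than the paper's Bayesian posterior; the simulated return matrix and all numbers are placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
# hypothetical (T x k) return matrix standing in for S&P 500 asset returns
returns = rng.multivariate_normal([0.001, 0.002, 0.0015],
                                  0.0001 * np.eye(3) + 0.00005, size=250)

mu = returns.mean(axis=0)
sigma_inv = np.linalg.inv(np.cov(returns, rowvar=False))
ones = np.ones(len(mu))

v_gmv = 1.0 / (ones @ sigma_inv @ ones)            # variance of the GMV portfolio
r_gmv = v_gmv * (ones @ sigma_inv @ mu)            # expected return of the GMV portfolio
q = sigma_inv - v_gmv * np.outer(sigma_inv @ ones, sigma_inv @ ones)
slope = mu @ q @ mu                                # slope parameter of the frontier
print("GMV return:", r_gmv, " GMV variance:", v_gmv, " slope:", slope)
```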
269.
Estimates of population characteristics such as domain means are often expected to follow monotonicity assumptions. Recently, a method to adaptively pool neighbouring domains was proposed, which ensures that the resulting domain mean estimates satisfy monotone constraints. The method leads to asymptotically valid estimation and inference, and can yield substantial gains in efficiency compared with unconstrained domain estimators. However, assuming incorrect shape constraints may lead to biased estimators. Here, we develop the Cone Information Criterion for Survey Data as a diagnostic method to measure departures from monotonicity in population domain means. We show that the criterion leads to a consistent methodology that makes an asymptotically correct decision when choosing between unconstrained and constrained domain mean estimators. The Canadian Journal of Statistics 47: 315–331; 2019 © 2019 Statistical Society of Canada
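A small weighted pool-adjacent-violators sketch of the kind of neighbour-pooling that produces monotone domain mean estimates; the domain means and sample sizes below are invented, and survey weighting is reduced to using domain sizes as weights.

```python
import numpy as np

def pava(means, weights):
    """Weighted pool-adjacent-violators: monotone (non-decreasing) fit to domain means."""
    # each block holds [pooled mean, pooled weight, number of domains pooled]
    blocks = [[float(v), float(w), 1] for v, w in zip(means, weights)]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:            # violation: pool neighbouring blocks
            v1, w1, c1 = blocks[i]
            v2, w2, c2 = blocks[i + 1]
            blocks[i] = [(w1 * v1 + w2 * v2) / (w1 + w2), w1 + w2, c1 + c2]
            del blocks[i + 1]
            i = max(i - 1, 0)                          # re-check the previous block
        else:
            i += 1
    return np.concatenate([[v] * c for v, w, c in blocks])

domain_means = np.array([1.2, 1.0, 1.5, 1.4, 2.1])
domain_n     = np.array([30, 45, 20, 60, 25])
print(pava(domain_means, domain_n))                    # monotone pooled estimates
```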
270.
Motivated by a recent tuberculosis (TB) study, this paper is concerned with covariates missing not at random (MNAR) and models the potential intracluster correlation by a frailty. We consider the regression analysis of right-censored event times from clustered subjects under a Cox proportional hazards frailty model and present the semiparametric maximum likelihood estimator (SPMLE) of the model parameters. An easy-to-implement pseudo-SPMLE is then proposed to accommodate more realistic situations using readily available supplementary information on the missing covariates. Algorithms are provided to compute the estimators and their consistent variance estimators. We demonstrate that both the SPMLE and the pseudo-SPMLE are consistent and asymptotically normal using arguments based on the theory of modern empirical processes. The proposed approach is examined numerically via simulation and illustrated with an analysis of the motivating TB study data.