941.
This article describes how a frequentist model averaging approach can be used for concentration–QT analyses in the context of thorough QTc studies. Based on simulations, we conclude that, starting from three candidate model families (linear, exponential, and Emax), the model averaging approach leads to treatment effect estimates that are quite robust with respect to control of the type I error in nearly all simulated scenarios; in particular, with model averaging, the type I error appears less sensitive to model misspecification than with the widely used linear model. We also noticed few differences in performance between the model averaging approach and the more classical model selection approach; although both can be recommended in practice, we believe model averaging is the more appealing of the two because of deficiencies of the model selection approach pointed out in the literature. We think that a model averaging or model selection approach should be systematically considered for conducting concentration–QT analyses. Copyright © 2016 John Wiley & Sons, Ltd.
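As a minimal sketch of the idea, frequentist model averaging can weight candidate concentration–QT fits by information-criterion weights. The code below is illustrative only (function names, starting values, and the AIC weighting scheme are assumptions, not the article's exact method), fitting linear and Emax candidates by least squares and averaging their predictions:

```python
import numpy as np
from scipy.optimize import curve_fit

def linear(c, b0, b1):
    return b0 + b1 * c

def emax_model(c, e0, emax, ec50):
    return e0 + emax * c / (ec50 + c)

def aic_weights(conc, qtc, models, starts):
    """Least-squares fit of each candidate model; return AIC weights and fits."""
    n, aics, fits = len(conc), [], []
    for f, p0 in zip(models, starts):
        popt, _ = curve_fit(f, conc, qtc, p0=p0, maxfev=20000)
        rss = float(np.sum((qtc - f(conc, *popt)) ** 2))
        k = len(popt) + 1                       # parameters plus residual variance
        aics.append(n * np.log(rss / n) + 2 * k)
        fits.append(f(conc, *popt))
    d = np.array(aics) - np.min(aics)
    w = np.exp(-0.5 * d)                        # Akaike weights
    return w / w.sum(), fits

# Synthetic data with an Emax-shaped concentration-QTc relationship.
rng = np.random.default_rng(0)
conc = rng.uniform(0.0, 10.0, 80)
qtc = 400 + 10 * conc / (2 + conc) + rng.normal(0, 1, 80)
w, fits = aic_weights(conc, qtc, [linear, emax_model],
                      [(400.0, 1.0), (400.0, 10.0, 2.0)])
averaged = sum(wi * fi for wi, fi in zip(w, fits))  # model-averaged prediction
```

Averaging the treatment-effect estimate across models, rather than committing to the single best one, is what shields the type I error from misspecification of any one family.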
942.
943.
In this paper, we define and study a new notion for the comparison of the hazard rates of two random variables taking into account their mutual dependence. Properties, applications and the comparison for a data set are given.
944.
Data envelopment analysis (DEA) and free disposal hull (FDH) estimators are widely used to estimate efficiency of production. Practitioners use DEA estimators far more frequently than FDH estimators, implicitly assuming that production sets are convex. Moreover, use of the constant returns to scale (CRS) version of the DEA estimator requires an assumption of CRS. Although bootstrap methods have been developed for making inference about the efficiencies of individual units, until now no methods exist for making consistent inference about differences in mean efficiency across groups of producers or for testing hypotheses about model structure such as returns to scale or convexity of the production set. We use central limit theorem results from our previous work to develop additional theoretical results permitting consistent tests of model structure and provide Monte Carlo evidence on the performance of the tests in terms of size and power. In addition, the variable returns to scale version of the DEA estimator is proved to attain the faster convergence rate of the CRS-DEA estimator under CRS. Using a sample of U.S. commercial banks, we test and reject convexity of the production set, calling into question results from numerous banking studies that have imposed convexity assumptions. Supplementary materials for this article are available online.
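To see what dropping convexity buys, the FDH estimator can be computed by simple enumeration, with no linear program. This is an illustrative sketch of the standard input-oriented FDH score (the function name and toy data are ours, not from the article, and the article's inference procedures are not shown):

```python
import numpy as np

def fdh_input_efficiency(X, Y):
    """Input-oriented FDH efficiency scores. Unlike DEA, the free
    disposal hull imposes no convexity on the production set, so each
    unit is benchmarked directly against the observed units whose
    outputs weakly dominate its own."""
    n = X.shape[0]
    theta = np.empty(n)
    for i in range(n):
        dom = np.all(Y >= Y[i], axis=1)          # output-dominating units
        ratios = np.max(X[dom] / X[i], axis=1)   # worst input ratio per dominator
        theta[i] = ratios.min()                  # best attainable input contraction
    return theta

# Toy data: unit 1 produces the same output as unit 0 using twice the input.
X = np.array([[2.0], [4.0], [3.0]])   # one input per unit
Y = np.array([[3.0], [3.0], [1.0]])   # one output per unit
theta = fdh_input_efficiency(X, Y)    # -> [1.0, 0.5, 2/3]
```

A DEA score for the same data would instead solve a linear program over convex combinations of units, which is exactly the convexity assumption the article's test calls into question.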
945.
We estimate two well-known risk measures, the value-at-risk (VaR) and the expected shortfall, conditionally on a functional variable (i.e., a random variable valued in some semi(pseudo)-metric space). We use nonparametric kernel estimation to construct estimators of these quantities, under general dependence conditions. Theoretical properties are stated, and practical aspects are illustrated on simulated data: nonlinear functional and GARCH(1,1) models. Some ideas on bandwidth selection using the bootstrap are introduced. Finally, an empirical example is given using data from the S&P 500 time series.
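The kernel idea can be sketched in the simplest setting: Nadaraya–Watson weights on the conditioning variable give a weighted empirical conditional distribution, whose lower quantile is the conditional VaR and whose lower-tail average is the expected shortfall. This sketch uses a scalar covariate and a fixed bandwidth (the paper works with a functional covariate in a semi-metric space and discusses bandwidth choice); all names are ours:

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u ** 2)

def conditional_var(x0, X, R, alpha=0.05, h=0.5):
    """Kernel estimate of the conditional alpha-quantile (VaR) of R given
    X = x0: the alpha-quantile of the NW-weighted empirical distribution."""
    w = gaussian_kernel((X - x0) / h)
    w = w / w.sum()
    order = np.argsort(R)
    cum = np.cumsum(w[order])
    # smallest return at which the weighted CDF reaches alpha
    return R[order][np.searchsorted(cum, alpha)]

def conditional_es(x0, X, R, alpha=0.05, h=0.5):
    """Expected shortfall: kernel-weighted mean of returns at or below VaR."""
    var = conditional_var(x0, X, R, alpha, h)
    w = gaussian_kernel((X - x0) / h)
    mask = R <= var
    return np.sum(w[mask] * R[mask]) / np.sum(w[mask])

rng = np.random.default_rng(1)
X = rng.normal(size=2000)   # conditioning variable (scalar stand-in)
R = rng.normal(size=2000)   # returns
v = conditional_var(0.0, X, R)
e = conditional_es(0.0, X, R)
```

For a genuinely functional covariate, `(X - x0) / h` would be replaced by `d(X_i, x0) / h` for the chosen semi-metric `d`, which is the only place the functional structure enters.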
946.
This paper proposes a new factor rotation for the context of functional principal components analysis. This rotation seeks to re-express a functional subspace in terms of directions of decreasing smoothness, as represented by a generalized smoothing metric. The rotation can be implemented simply, and we show in two examples that it can improve the interpretability of the leading components.
947.
This paper introduces a finite mixture of canonical fundamental skew \(t\) (CFUST) distributions for a model-based approach to clustering where the clusters are asymmetric and possibly long-tailed (in: Lee and McLachlan, arXiv:1401.8182 [stat.ME], 2014b). The family of CFUST distributions includes the restricted multivariate skew \(t\) and unrestricted multivariate skew \(t\) distributions as special cases. In recent years, a few versions of the multivariate skew \(t\) (MST) mixture model have been put forward, together with various EM-type algorithms for parameter estimation. These formulations adopted either a restricted or unrestricted characterization for their MST densities. In this paper, we examine a natural generalization of these developments, employing the CFUST distribution as the parametric family for the component distributions, and point out that the restricted and unrestricted characterizations can be unified under this general formulation. We show that an exact implementation of the EM algorithm can be achieved for the CFUST distribution and mixtures of this distribution, and present some new analytical results for a conditional expectation involved in the E-step.
948.
Estimation of the time-average variance constant (TAVC) of a stationary process plays a fundamental role in statistical inference for the mean of a stochastic process. Wu (2009) proposed an efficient algorithm to recursively compute the TAVC with \(O(1)\) memory and computational complexity. In this paper, we propose two new recursive TAVC estimators that can compute the TAVC estimate with \(O(1)\) computational complexity. One of them is uniformly better than Wu's estimator in terms of asymptotic mean squared error (MSE) at the cost of slightly higher memory complexity. The other preserves the \(O(1)\) memory complexity and is better than Wu's estimator in most situations. Moreover, the first estimator is nearly optimal in the sense that its asymptotic MSE is \(2^{10/3}3^{-2} \fallingdotseq 1.12\) times that of the optimal off-line TAVC estimator.
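For orientation, the classical off-line baseline that the recursive estimators refine is the batch-means estimator: the TAVC \(\sigma^2 = \lim_n n\,\mathrm{Var}(\bar{x}_n)\) is approximated by the scaled variance of non-overlapping batch means. This is a sketch of that baseline only (the batch-size rule and function name are our choices), not of Wu's \(O(1)\)-memory recursion or the paper's new estimators:

```python
import numpy as np

def batch_means_tavc(x, batch_size=None):
    """Off-line batch-means estimate of the time-average variance constant:
    split the series into non-overlapping batches and scale the variance
    of the batch means by the batch length."""
    n = len(x)
    b = batch_size or int(n ** (1 / 3))      # a common rule-of-thumb batch size
    m = n // b
    means = x[: m * b].reshape(m, b).mean(axis=1)
    return b * means.var(ddof=1)

# AR(1) chain x_t = phi * x_{t-1} + eps_t; its TAVC is 1 / (1 - phi)^2 = 4.
rng = np.random.default_rng(0)
phi, n = 0.5, 100_000
eps = rng.normal(0.0, 1.0, n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]
est = batch_means_tavc(x)
```

The off-line version must store the whole series; the point of the recursive estimators is to track a comparable quantity while keeping only \(O(1)\) running summaries as observations arrive.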
949.
Both approximate Bayesian computation (ABC) and composite likelihood methods are useful for Bayesian and frequentist inference, respectively, when the likelihood function is intractable. We propose to use composite likelihood score functions as summary statistics in ABC in order to obtain accurate approximations to the posterior distribution. This is motivated by the use of the score function of the full likelihood, and extended to general unbiased estimating functions in complex models. Moreover, we show that if the composite score is suitably standardised, the resulting ABC procedure is invariant to reparameterisations and automatically adjusts the curvature of the composite likelihood, and of the corresponding posterior distribution. The method is illustrated through examples with simulated data, and an application to modelling of spatial extreme rainfall data is discussed.
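The mechanics can be illustrated with rejection ABC in a toy model where the working score is available in closed form. Everything below is a simplified stand-in for the paper's method (a Gaussian working likelihood plays the role of the composite likelihood, the pilot estimate and scaling are our choices, and the standardisation that gives reparameterisation invariance is only crudely approximated by a \(\sqrt{n}\) factor):

```python
import numpy as np

def score_summary(data, theta):
    """Score of an i.i.d. N(theta, 1) working likelihood: sum(x - theta).
    This stands in for the composite score used as the ABC summary."""
    return np.sum(data - theta)

def abc_rejection(obs, prior_draws, n_per_draw, eps, rng):
    """Rejection ABC: keep prior draws whose simulated score summary,
    evaluated at a pilot estimate from the observed data, is close to
    the observed summary (which is zero at the pilot estimate)."""
    theta_hat = obs.mean()                 # pilot estimate
    s_obs = score_summary(obs, theta_hat)  # = 0 by construction
    kept = []
    for theta in prior_draws:
        sim = rng.normal(theta, 1.0, n_per_draw)
        s_sim = score_summary(sim, theta_hat)
        if abs(s_sim - s_obs) / np.sqrt(n_per_draw) < eps:
            kept.append(theta)
    return np.array(kept)

rng = np.random.default_rng(2)
obs = rng.normal(1.0, 1.0, 100)            # observed data, true theta = 1
prior = rng.uniform(-5.0, 5.0, 5000)       # draws from a flat prior
post = abc_rejection(obs, prior, 100, 0.5, rng)
```

Because the score is a low-dimensional, model-informed summary, the accepted draws concentrate near the true parameter without the curse of dimensionality that raw-data summaries would suffer.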
950.
In analyzing interval censored data, a non-parametric estimator is often desired due to difficulties in assessing model fits. Because of this, the non-parametric maximum likelihood estimator (NPMLE) is often the default estimator. However, the estimates for values of interest of the survival function, such as the quantiles, have very large standard errors due to the jagged form of the estimator. By forcing the estimator to be constrained to the class of log concave functions, the estimator is ensured to have a smooth survival estimate which has much better operating characteristics than the unconstrained NPMLE, without needing to specify a parametric family or smoothing parameter. In this paper, we first prove that the likelihood can be maximized under a finite set of parameters under mild conditions, although the log likelihood function is not strictly concave. We then present an efficient algorithm for computing a local maximum of the likelihood function. Using our fast new algorithm, we present evidence from simulated current status data suggesting that the rate of convergence of the log-concave estimator is faster (between \(n^{2/5}\) and \(n^{1/2}\)) than the unconstrained NPMLE (between \(n^{1/3}\) and \(n^{1/2}\)).