111.
Missing covariate values are a common problem in survival analysis. In this paper we propose a novel method for the Cox regression model that is close to maximum likelihood but avoids the use of the EM algorithm. It exploits the fact that the observed hazard function is multiplicative in the baseline hazard function; the idea is to profile out this function before estimating the parameter of interest. In this step a Breslow-type estimator is used to estimate the cumulative baseline hazard function. We focus on the situation where the observed covariates are categorical, which allows us to calculate estimators without having to assume anything about the distribution of the covariates. We show that the proposed estimator is consistent and asymptotically normal, and derive a consistent estimator of the variance–covariance matrix that does not involve any choice of a perturbation parameter. Moderate-sample-size performance of the estimators is investigated via simulation and by application to a real data example.
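The profiling step described above rests on a Breslow-type estimator of the cumulative baseline hazard. The following is a minimal sketch of that estimator for fully observed, right-censored data and a fixed coefficient vector beta; the paper's handling of missing categorical covariates is not reproduced, and the function name is illustrative.

    import numpy as np

    def breslow_cumhaz(times, events, X, beta):
        # Breslow-type estimator of the cumulative baseline hazard:
        # Lambda_0(t) = sum over event times t_i <= t of
        #               1 / sum_{j in risk set R(t_i)} exp(x_j' beta)
        order = np.argsort(times)
        times = np.asarray(times)[order]
        events = np.asarray(events)[order]
        X = np.asarray(X)[order]
        risk = np.exp(X @ beta)                 # exp(x_j' beta) for every subject
        denom = np.cumsum(risk[::-1])[::-1]     # risk-set sum at each ordered time
        jumps = np.where(events == 1, 1.0 / denom, 0.0)
        return times, np.cumsum(jumps)          # Lambda_0 at the ordered times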
112.
In this paper, we investigate the problem of determining the relationship, represented by the similarity of the homologous gene configuration, between paired circular genomes using regression analysis. We propose a new regression model for studying two circular genomes, in which the Möbius transformation naturally arises and is taken as the link function, and we propose least circular distance estimation as an appropriate method for analyzing circular variables. The main utility of the new regression model is in identifying the new angular location of one gene of a homologous pair between two circular genomes, under various types of possible gene mutations, given the location of the other gene. Furthermore, we demonstrate the utility of the new regression model for grouping various genomes based on the closeness of their relationship. Using the angular locations of homologous genes from five pairs of circular genomes (Horimoto et al. in Bioinformatics 14:789–802, 1998), the new model is compared with existing models.
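As a rough illustration of the estimation idea, the sketch below fits a Möbius-transformation link between predictor and response angles by minimising a summed circular distance. The specific loss 1 - cos(residual), the parameterisation of the Möbius map and the optimiser are assumptions made for illustration, not the paper's exact least circular distance procedure.

    import numpy as np
    from scipy.optimize import minimize

    def mobius_pred(theta, omega, a):
        # Map predictor angles through a Mobius transformation of the unit circle:
        # z -> e^{i*omega} (z + a) / (1 + conj(a) z), with |a| < 1.
        z = np.exp(1j * theta)
        w = np.exp(1j * omega) * (z + a) / (1 + np.conj(a) * z)
        return np.angle(w)

    def fit_least_circular_distance(theta, phi):
        # Estimate (omega, a) by minimising sum_i (1 - cos(phi_i - predicted_i)),
        # with a = a1 + i*a2 constrained to lie inside the unit disk.
        def loss(par):
            omega, a1, a2 = par
            a = a1 + 1j * a2
            if abs(a) >= 0.999:          # crude penalty to keep a in the unit disk
                return 1e6
            return np.sum(1.0 - np.cos(phi - mobius_pred(theta, omega, a)))
        res = minimize(loss, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
        return res.x                     # fitted omega, a1, a2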
113.
Methods to perform regression on compositional covariates have recently been proposed using the isometric log-ratio (ilr) representation of compositional parts. This approach consists of first applying standard regression to the ilr coordinates and then transforming the estimated ilr coefficients into their contrast log-ratio counterparts. This gives easy-to-interpret parameters indicating the relative effect of each compositional part. In this work we present an extension of this framework in which compositional covariate effects are allowed to be smooth in the ilr domain. This is achieved by fitting a smooth function over the multidimensional ilr space using Bayesian P-splines. Smoothness is achieved by assuming random walk priors on the spline coefficients in a hierarchical Bayesian framework. The proposed methodology is applied to spatial data from an ecological survey on a gypsum outcrop located in the Emilia-Romagna region, Italy.
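The first step of this framework, mapping compositions to ilr coordinates before running a standard regression, can be sketched as follows. The particular orthonormal contrast basis below is an assumption (any such basis yields valid ilr coordinates), and the Bayesian P-spline smoothing step is not shown.

    import numpy as np

    def ilr_basis(D):
        # Orthonormal basis V (D x (D-1)) of the hyperplane orthogonal to the
        # vector of ones, built from Helmert-type contrasts.
        V = np.zeros((D, D - 1))
        for j in range(1, D):
            V[:j, j - 1] = 1.0 / j
            V[j, j - 1] = -1.0
            V[:, j - 1] /= np.linalg.norm(V[:, j - 1])
        return V

    def ilr(X):
        # Isometric log-ratio coordinates of compositions X (n x D, rows sum to 1).
        logX = np.log(X)
        clr = logX - logX.mean(axis=1, keepdims=True)   # centred log-ratio
        return clr @ ilr_basis(X.shape[1])              # n x (D-1) ilr coordinates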
114.
The skew normal distribution of Azzalini (Scand J Stat 12:171–178, 1985) has been found suitable for unimodal densities with some skewness present. In this article, we introduce a flexible extension of the Azzalini (Scand J Stat 12:171–178, 1985) skew normal distribution based on a symmetric component normal distribution (Gui et al. in J Stat Theory Appl 12(1):55–66, 2013). The proposed model can efficiently capture bimodality, skewness, kurtosis and heavy-tailed behaviour. The paper presents various basic properties of this family of distributions and provides two stochastic representations that are useful for obtaining theoretical properties and for simulating from the distribution. Further, maximum likelihood estimation of the parameters is studied numerically by simulation, and the distribution is investigated by carrying out comparative fitting of three real datasets.
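For reference, a minimal sketch of the baseline Azzalini (1985) skew normal, its density 2*phi(z)*Phi(alpha*z) and the standard stochastic representation used for simulation, is given below; the proposed bimodal extension itself is not reproduced here.

    import numpy as np
    from scipy.stats import norm

    def skew_normal_pdf(x, alpha, loc=0.0, scale=1.0):
        # Azzalini (1985) skew-normal density: f(z) = 2*phi(z)*Phi(alpha*z),
        # with z = (x - loc) / scale.
        z = (np.asarray(x) - loc) / scale
        return 2.0 / scale * norm.pdf(z) * norm.cdf(alpha * z)

    def skew_normal_rvs(alpha, size, seed=None):
        # Simulate via the stochastic representation
        # Z = delta*|U0| + sqrt(1 - delta^2)*U1, delta = alpha / sqrt(1 + alpha^2).
        rng = np.random.default_rng(seed)
        delta = alpha / np.sqrt(1.0 + alpha ** 2)
        u0 = rng.standard_normal(size)
        u1 = rng.standard_normal(size)
        return delta * np.abs(u0) + np.sqrt(1.0 - delta ** 2) * u1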
115.
In this article, we develop a mixed-frequency dynamic factor model in which the disturbances of both the latent common factor and the idiosyncratic components have time-varying stochastic volatilities. We use the model to investigate business cycle dynamics in the euro area and present three sets of empirical results. First, we evaluate the impact of macroeconomic releases on point and density forecast accuracy and on the width of forecast intervals. Second, we show how our setup allows a probabilistic assessment of the contribution of releases to forecast revisions. Third, we examine out-of-sample point and density forecast accuracy. We find that introducing stochastic volatility in the model improves both point and density forecast accuracy. Supplementary materials for this article are available online.
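A stripped-down simulation of the model's core structure, one latent factor whose disturbance and whose idiosyncratic disturbances all carry random-walk stochastic volatility, is sketched below. The mixed-frequency aggregation, priors and estimation machinery of the paper are omitted, and all numerical settings are illustrative assumptions.

    import numpy as np

    def simulate_dfm_sv(T=200, N=4, phi=0.8, seed=0):
        # One-factor dynamic factor model with stochastic volatility in both the
        # factor shock and the idiosyncratic shocks (random-walk log-variances).
        rng = np.random.default_rng(seed)
        lam = rng.normal(1.0, 0.3, size=N)      # factor loadings
        h_f = np.zeros(T)                       # log-variance of the factor shock
        h_e = np.zeros((T, N))                  # log-variances of idiosyncratic shocks
        f = np.zeros(T)                         # latent common factor
        y = np.zeros((T, N))                    # observed series
        for t in range(1, T):
            h_f[t] = h_f[t - 1] + 0.1 * rng.standard_normal()
            h_e[t] = h_e[t - 1] + 0.1 * rng.standard_normal(N)
            f[t] = phi * f[t - 1] + np.exp(0.5 * h_f[t]) * rng.standard_normal()
            y[t] = lam * f[t] + np.exp(0.5 * h_e[t]) * rng.standard_normal(N)
        return y, f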
116.
This paper proposes a new factor rotation for the context of functional principal components analysis. The rotation seeks to re-express a functional subspace in terms of directions of decreasing smoothness, as measured by a generalized smoothing metric. The rotation can be implemented simply, and we show on two examples that it can improve the interpretability of the leading components.
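A plain-vanilla version of such a rotation can be sketched by restricting a roughness penalty to the leading subspace and rotating along its eigenvectors; the second-difference penalty below stands in for the paper's generalized smoothing metric and is an assumption made for illustration.

    import numpy as np

    def smoothness_rotation(U, grid):
        # Rotate an orthonormal basis U (len(grid) x k) of a functional subspace so
        # that the rotated components are ordered from smoothest to roughest under
        # a second-difference roughness metric.
        n = len(grid)
        D2 = np.diff(np.eye(n), n=2, axis=0)    # second-difference operator
        P = D2.T @ D2                           # roughness penalty matrix
        M = U.T @ P @ U                         # penalty restricted to the subspace
        roughness, R = np.linalg.eigh(M)        # ascending eigenvalues
        return U @ R, roughness                 # columns ordered smooth -> rough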
117.
Estimation of the time-average variance constant (TAVC) of a stationary process plays a fundamental role in statistical inference for the mean of a stochastic process. Wu (2009) proposed an efficient algorithm that recursively computes the TAVC with \(O(1)\) memory and computational complexity. In this paper, we propose two new recursive TAVC estimators that compute the TAVC estimate with \(O(1)\) computational complexity. One of them is uniformly better than Wu's estimator in terms of asymptotic mean squared error (MSE), at the cost of slightly higher memory complexity. The other preserves the \(O(1)\) memory complexity and is better than Wu's estimator in most situations. Moreover, the first estimator is nearly optimal in the sense that its asymptotic MSE is \(2^{10/3}3^{-2} \fallingdotseq 1.12\) times that of the optimal off-line TAVC estimator.
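The sketch below shows a recursive batch-means TAVC estimator that keeps only O(1) state, in the spirit of the estimators discussed above; the block-length schedule and the exact construction are illustrative assumptions, not the paper's estimators. Feeding it iid standard normal observations should give estimates near 1, the TAVC of white noise.

    import numpy as np

    class RecursiveTAVC:
        # Block k has length ceil(sqrt(k)), so after n observations the current
        # block length is of order n^{1/3}.  All state is O(1) in n.
        def __init__(self):
            self.n = 0              # observations seen
            self.total = 0.0        # running sum of all observations
            self.k = 1              # index of the current block
            self.block_sum = 0.0    # sum within the current block
            self.block_len = 0      # length accumulated in the current block
            self.S2 = 0.0           # sum over completed blocks of block_sum^2
            self.SL = 0.0           # sum of block_sum * block_len
            self.L2 = 0.0           # sum of block_len^2
            self.L = 0              # total length of completed blocks

        def update(self, x):
            self.n += 1
            self.total += x
            self.block_sum += x
            self.block_len += 1
            if self.block_len >= int(np.ceil(np.sqrt(self.k))):   # block complete
                self.S2 += self.block_sum ** 2
                self.SL += self.block_sum * self.block_len
                self.L2 += self.block_len ** 2
                self.L += self.block_len
                self.k += 1
                self.block_sum, self.block_len = 0.0, 0

        def estimate(self):
            # TAVC estimate: sum_k (S_k - l_k * xbar)^2 divided by the total
            # length of the completed blocks.
            if self.L == 0:
                return np.nan
            xbar = self.total / self.n
            ss = self.S2 - 2.0 * xbar * self.SL + xbar ** 2 * self.L2
            return ss / self.L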
118.
Both approximate Bayesian computation (ABC) and composite likelihood methods are useful for Bayesian and frequentist inference, respectively, when the likelihood function is intractable. We propose to use composite likelihood score functions as summary statistics in ABC in order to obtain accurate approximations to the posterior distribution. This is motivated by the use of the score function of the full likelihood, and is extended to general unbiased estimating functions in complex models. Moreover, we show that if the composite score is suitably standardised, the resulting ABC procedure is invariant to reparameterisations and automatically adjusts the curvature of the composite likelihood and of the corresponding posterior distribution. The method is illustrated through examples with simulated data, and an application to modelling spatial extreme rainfall data is discussed.
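A toy sketch of the idea, ABC rejection with a score evaluated at a fixed pilot value as the summary statistic, is given below for a normal-mean model in which the composite and full scores coincide. The prior, tolerance and the omission of the paper's standardisation are all illustrative assumptions.

    import numpy as np

    def abc_score_summary(y_obs, n_sims=50000, tol=0.05, theta0=0.0, seed=1):
        # Model: y_i ~ N(theta, 1) iid; prior theta ~ N(0, 10).
        # Summary statistic: the score at theta0, s(theta0; y) = sum_i (y_i - theta0),
        # scaled by the sample size.
        rng = np.random.default_rng(seed)
        n = len(y_obs)
        s_obs = np.sum(np.asarray(y_obs) - theta0) / n      # observed summary
        theta = rng.normal(0.0, np.sqrt(10.0), n_sims)      # draws from the prior
        y_sim = rng.standard_normal((n_sims, n)) + theta[:, None]
        s_sim = np.sum(y_sim - theta0, axis=1) / n          # simulated summaries
        accepted = theta[np.abs(s_sim - s_obs) <= tol]
        return accepted                                     # approximate posterior sample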
119.
In analyzing interval-censored data, a non-parametric estimator is often desired because model fit is difficult to assess; consequently, the non-parametric maximum likelihood estimator (NPMLE) is often the default choice. However, estimates of quantities of interest, such as the quantiles of the survival function, have very large standard errors due to the jagged form of the estimator. By constraining the estimator to the class of log-concave functions, we ensure a smooth survival estimate with much better operating characteristics than the unconstrained NPMLE, without needing to specify a parametric family or a smoothing parameter. In this paper, we first prove that, under mild conditions, the likelihood can be maximized over a finite set of parameters even though the log-likelihood function is not strictly concave. We then present an efficient algorithm for computing a local maximum of the likelihood function. Using our fast new algorithm, we present evidence from simulated current status data suggesting that the rate of convergence of the log-concave estimator is faster (between \(n^{2/5}\) and \(n^{1/2}\)) than that of the unconstrained NPMLE (between \(n^{1/3}\) and \(n^{1/2}\)).
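For the current status setting used in the comparison, the unconstrained NPMLE of the distribution function is the isotonic regression of the censoring indicators on the monitoring times, which the sketch below computes with the pool-adjacent-violators algorithm; the log-concave constrained estimator and the paper's algorithm are not reproduced here.

    import numpy as np

    def pava(y, w):
        # Pool-adjacent-violators: weighted non-decreasing isotonic fit.
        vals, wts, lens = [], [], []
        for yi, wi in zip(y, w):
            vals.append(float(yi)); wts.append(float(wi)); lens.append(1)
            while len(vals) > 1 and vals[-2] > vals[-1]:
                merged_w = wts[-2] + wts[-1]
                merged_v = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / merged_w
                vals[-2], wts[-2], lens[-2] = merged_v, merged_w, lens[-2] + lens[-1]
                vals.pop(); wts.pop(); lens.pop()
        return np.repeat(vals, lens)

    def npmle_current_status(C, delta):
        # Unconstrained NPMLE of F at the sorted monitoring times C_i for current
        # status data: isotonic regression of the indicators delta_i = 1{T_i <= C_i}.
        order = np.argsort(C)
        F_hat = pava(np.asarray(delta, float)[order], np.ones(len(C)))
        return np.asarray(C)[order], F_hat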