121.
In randomized clinical trials, the log-rank test is often used to test the null hypothesis that treatment-specific survival distributions are equal. In observational studies, however, the ordinary log-rank test is no longer guaranteed to be valid, and we must be cautious about potential confounders, that is, covariates that affect both treatment assignment and the survival distribution. This paper considers two cases: the first, where all potential confounders are believed to be captured in the primary database, and the second, where a substudy is conducted to capture additional confounding covariates. We generalize the augmented inverse probability weighted complete-case estimators for treatment-specific survival distributions proposed in Bai et al. (Biometrics 69:830–839, 2013) and develop log-rank-type tests for both cases. The consistency and double robustness of the proposed test statistics are demonstrated in simulation studies. The statistics are then applied to data from the observational study that motivated this research.
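As a minimal sketch of the weighting idea (not the paper's augmented, doubly robust estimator), the code below computes the observed-minus-expected numerator of a log-rank statistic with inverse-probability-of-treatment weights. It assumes propensity scores are already known or estimated; the data layout and function name are illustrative.

```python
def ipw_logrank_numerator(data):
    """Weighted observed-minus-expected events for the treated arm.

    `data` rows are (time, event, treatment, propensity). Each subject gets
    weight 1/e(X) if treated, 1/(1 - e(X)) if control, so the weighted risk
    sets mimic a randomized trial under no unmeasured confounding.
    """
    def w(a, e):
        return 1.0 / e if a == 1 else 1.0 / (1.0 - e)

    event_times = sorted({t for t, d, a, e in data if d == 1})
    U = 0.0
    for t in event_times:
        at_risk = [(a, w(a, e)) for (s, d, a, e) in data if s >= t]
        events = [(a, w(a, e)) for (s, d, a, e) in data if s == t and d == 1]
        Y1 = sum(wt for a, wt in at_risk if a == 1)   # weighted treated at risk
        Y = sum(wt for _, wt in at_risk)              # weighted total at risk
        dN1 = sum(wt for a, wt in events if a == 1)   # weighted treated events
        dN = sum(wt for _, wt in events)              # weighted total events
        U += dN1 - Y1 / Y * dN                        # observed - expected
    return U
```

With constant propensity 0.5 (a randomized trial), the weights are all equal and the statistic reduces to a rescaled ordinary log-rank numerator.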
122.
One important goal in multi-state modelling is to explore conditional transition-type-specific hazard rate functions by estimating the effects of explanatory variables. This may be performed with separate transition-type-specific models if the covariate effects are assumed to differ across transition types. To investigate whether this assumption holds, or whether an effect is equal across several transition types (a cross-transition-type effect), a combined model has to be applied, for instance via a stratified partial likelihood formulation. Prior knowledge about the underlying covariate effect mechanisms is often sparse, especially about the ineffectiveness of transition-type-specific or cross-transition-type effects. Data-driven variable selection is therefore an important task: joint modelling of all transition types entails a large number of estimable effects. A related, subsequent task is model choice: is an effect satisfactorily estimated under a linearity assumption, or does its true underlying form deviate strongly from linearity? This article introduces component-wise functional gradient descent boosting (boosting, for short) for multi-state models, an approach that performs unsupervised variable selection and model choice simultaneously within a single estimation run. We demonstrate that the features and advantages of boosting established in classical regression scenarios carry over to multi-state models. Boosting thus provides an effective means to answer questions about the ineffectiveness and non-linearity of single transition-type-specific or cross-transition-type effects.
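To illustrate the component-wise principle in its simplest setting (squared-error loss for ordinary regression, not the multi-state partial likelihood used in the paper), the sketch below fits one covariate at a time: each boosting step selects the single covariate that best fits the current residual and moves its coefficient by a small step. Covariates never selected keep a zero coefficient, which is the built-in variable selection. All names are illustrative.

```python
def l2_boost(X, y, steps=50, nu=0.1):
    """Component-wise L2-boosting: at each step, fit the residual by simple
    least squares on every covariate separately, update only the best one."""
    p, coef = len(X[0]), [0.0] * len(X[0])
    resid = list(y)
    for _ in range(steps):
        best_j, best_b, best_loss = 0, 0.0, float("inf")
        for j in range(p):
            xs = [row[j] for row in X]
            sxx = sum(x * x for x in xs)
            if sxx == 0:
                continue
            b = sum(x * r for x, r in zip(xs, resid)) / sxx  # simple LS fit
            loss = sum((r - b * x) ** 2 for x, r in zip(xs, resid))
            if loss < best_loss:
                best_j, best_b, best_loss = j, b, loss
        coef[best_j] += nu * best_b  # small step on the winning component
        resid = [y_i - sum(c * x for c, x in zip(coef, row))
                 for y_i, row in zip(y, X)]
    return coef
```

If the response depends only on the first covariate, its coefficient converges toward the least-squares value while the other coefficients remain exactly zero.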
123.
Missing covariate values are a common problem in survival analysis. In this paper we propose a novel method for the Cox regression model that is close to maximum likelihood but avoids the EM algorithm. It exploits the fact that the observed hazard function is multiplicative in the baseline hazard function; the idea is to profile out this function before estimating the parameter of interest. In this step a Breslow-type estimator is used for the cumulative baseline hazard function. We focus on the situation where the observed covariates are categorical, which allows us to calculate estimators without assuming anything about the covariate distribution. We show that the proposed estimator is consistent and asymptotically normal, and derive a consistent estimator of the variance–covariance matrix that does not involve choosing a perturbation parameter. Moderate-sample-size performance of the estimators is investigated via simulation and by application to a real data example.
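The profiling step rests on the standard Breslow estimator of the cumulative baseline hazard, which the following minimal sketch computes for complete data (the paper's missing-covariate machinery is not reproduced here). `risk_scores` holds exp(beta'x) for each subject; with all scores equal to 1 it reduces to the Nelson–Aalen estimator.

```python
def breslow_cumhaz(times, events, risk_scores):
    """Breslow estimator of the cumulative baseline hazard:
    at each event time t, dLambda0(t) = (# events at t) / sum of exp(beta'x)
    over the risk set {i : T_i >= t}. Returns (t, Lambda0(t)) pairs."""
    n = len(times)
    cum, out = 0.0, []
    for t in sorted({times[i] for i in range(n) if events[i] == 1}):
        d = sum(1 for i in range(n) if times[i] == t and events[i] == 1)
        denom = sum(risk_scores[i] for i in range(n) if times[i] >= t)
        cum += d / denom
        out.append((t, cum))
    return out
```

Plugging this profiled baseline back into the likelihood is what lets the method estimate the regression parameter without modelling the covariate distribution.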
124.
In this paper, we investigate the problem of determining the relationship between paired circular genomes, represented by the similarity of their homologous gene configurations, using regression analysis. We propose a new regression model for studying two circular genomes in which the Möbius transformation arises naturally and is taken as the link function, and we propose least circular distance estimation as an appropriate method for analyzing circular variables. The main utility of the new regression model is in identifying the angular location of one gene of a homologous pair on one circular genome, given that of the other, under various types of possible gene mutations. Furthermore, we demonstrate the utility of the new regression model for grouping genomes by closeness of relationship. Using angular locations of homologous genes from five pairs of circular genomes (Horimoto et al. in Bioinformatics 14:789–802, 1998), the new model is compared with existing models.
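A Möbius transformation that maps the unit circle to itself has the form w = e^{i phi} (z - a) / (1 - conj(a) z) with |a| < 1, which makes it a natural link between two angular locations. The sketch below evaluates this map for an angle; the parametrization by `a` and `phi` is illustrative and not necessarily the paper's exact parameterization.

```python
import cmath
import math

def mobius_link(theta, a, phi=0.0):
    """Send angle theta through a circle-preserving Möbius transformation.

    z = e^{i theta} on the unit circle is mapped to
    w = e^{i phi} (z - a) / (1 - conj(a) z), |a| < 1, which stays on the
    circle; the returned value is the angle of w in (-pi, pi].
    """
    z = cmath.exp(1j * theta)
    w = cmath.exp(1j * phi) * (z - a) / (1 - complex(a).conjugate() * z)
    return math.atan2(w.imag, w.real)
```

With a = 0 the map is a pure rotation by phi, and with a = 0, phi = 0 it is the identity; nonzero `a` warps angular locations nonuniformly around the circle, which is what accommodates different mutation patterns.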
125.
Methods to perform regression on compositional covariates have recently been proposed using the isometric log-ratio (ilr) representation of compositional parts. This approach first applies standard regression to the ilr coordinates and then transforms the estimated ilr coefficients into their contrast log-ratio counterparts, giving easy-to-interpret parameters that indicate the relative effect of each compositional part. In this work we present an extension of this framework in which compositional covariate effects are allowed to be smooth in the ilr domain. This is achieved by fitting a smooth function over the multidimensional ilr space using Bayesian P-splines, with smoothness induced by random walk priors on the spline coefficients in a hierarchical Bayesian framework. The proposed methodology is applied to spatial data from an ecological survey on a gypsum outcrop in the Emilia-Romagna region, Italy.
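The first step of the framework is the ilr transform itself. A minimal sketch using the standard pivot (balance) basis is below; it maps a D-part composition to D-1 real coordinates and is invariant to rescaling of the composition, so any positive vector can be passed directly.

```python
import math

def ilr(x):
    """Isometric log-ratio (pivot) coordinates of a composition x.

    Coordinate i contrasts part i against the geometric mean of the
    remaining parts i+1..D-1, scaled so the map is an isometry:
    z_i = sqrt(r/(r+1)) * (log x_i - mean(log x_{i+1..D-1})), r = D-i-1.
    """
    D = len(x)
    logs = [math.log(v) for v in x]
    coords = []
    for i in range(D - 1):
        r = D - i - 1                      # number of remaining parts
        gm = sum(logs[i + 1:]) / r         # log geometric mean of the rest
        coords.append(math.sqrt(r / (r + 1)) * (logs[i] - gm))
    return coords
```

An equal composition maps to the origin of the ilr space, and the smooth effect surface of the paper is then fitted over these coordinates.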
126.
The skew normal distribution of Azzalini (Scand J Stat 12:171–178, 1985) has been found suitable for unimodal data with some skewness present. In this article, we introduce a flexible extension of the Azzalini skew normal distribution based on a symmetric-component normal distribution (Gui et al. in J Stat Theory Appl 12(1):55–66, 2013). The proposed model can capture bimodality, skewness, kurtosis and heavy tails. The paper presents various basic properties of this family of distributions and provides two stochastic representations that are useful for deriving theoretical properties and for simulating from the distribution. Further, maximum likelihood estimation of the parameters is studied numerically by simulation, and the distribution is investigated by comparative fitting of three real datasets.
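For reference, the Azzalini (1985) base density that the article extends is f(z) = 2 phi(z) Phi(alpha z), where phi and Phi are the standard normal density and CDF. A minimal sketch with a location-scale version (the extension of the paper is not implemented here):

```python
import math

def skew_normal_pdf(x, alpha=0.0, loc=0.0, scale=1.0):
    """Azzalini skew normal density: f(z) = 2 * phi(z) * Phi(alpha * z),
    with z = (x - loc) / scale; alpha = 0 recovers the normal density."""
    z = (x - loc) / scale
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(alpha * z / math.sqrt(2.0)))
    return 2.0 * phi * Phi / scale
```

Note that at z = 0 the skewing factor 2 Phi(0) equals 1 for every alpha, so all members of the family share the same density value at the location parameter.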
127.
This paper proposes a new factor rotation for the context of functional principal components analysis. The rotation re-expresses a functional subspace in terms of directions of decreasing smoothness, as represented by a generalized smoothing metric. The rotation can be implemented simply, and we show on two examples that it can improve the interpretability of the leading components.
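A toy version of the idea: within the plane spanned by two sampled basis functions, search planar rotations for the angle whose first rotated direction is smoothest, using a squared-second-difference penalty as a crude stand-in for the paper's generalized smoothing metric. All names and the grid search are illustrative.

```python
import math

def roughness(v):
    """Sum of squared second differences: a simple discrete roughness penalty
    (zero for any linear sequence)."""
    return sum((v[i - 1] - 2 * v[i] + v[i + 1]) ** 2 for i in range(1, len(v) - 1))

def rotate_to_smoothness(u, v, steps=360):
    """Grid-search rotations of span{u, v}; return the orthogonal pair whose
    first direction minimizes the roughness penalty."""
    best = (float("inf"), u, v)
    for k in range(steps):
        a = math.pi * k / steps
        r1 = [math.cos(a) * x + math.sin(a) * y for x, y in zip(u, v)]
        r2 = [-math.sin(a) * x + math.cos(a) * y for x, y in zip(u, v)]
        if roughness(r1) < best[0]:
            best = (roughness(r1), r1, r2)
    return best[1], best[2]
```

Mixing a smooth (linear) trend with a wiggly alternating signal and rotating back recovers the smooth direction, illustrating how ordering directions by smoothness can clean up component interpretation.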
128.
Estimation of the time-average variance constant (TAVC) of a stationary process plays a fundamental role in statistical inference for the mean of a stochastic process. Wu (2009) proposed an efficient algorithm to recursively compute the TAVC with \(O(1)\) memory and computational complexity. In this paper, we propose two new recursive TAVC estimators that compute the TAVC estimate with \(O(1)\) computational complexity. One of them is uniformly better than Wu's estimator in terms of asymptotic mean squared error (MSE), at the cost of slightly higher memory complexity. The other preserves the \(O(1)\) memory complexity and is better than Wu's estimator in most situations. Moreover, the first estimator is nearly optimal in the sense that its asymptotic MSE is \(2^{10/3}3^{-2} \approx 1.12\) times that of the optimal off-line TAVC estimator.
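For orientation on what the TAVC is, the sketch below uses the classical off-line batch-means estimator rather than the recursive \(O(1)\) schemes discussed in the paper: split the series into batches, and scale the sample variance of the batch means by the batch size. For an ergodic process this targets \(\sigma^2 = \lim_n n \,\mathrm{Var}(\bar{X}_n)\); for i.i.d. data it estimates the marginal variance.

```python
def batch_means_tavc(x, batch_size):
    """Off-line batch-means estimate of the time-average variance constant:
    batch_size * sample variance of the nonoverlapping batch means."""
    n = len(x) - len(x) % batch_size          # drop the incomplete tail batch
    means = [sum(x[i:i + batch_size]) / batch_size
             for i in range(0, n, batch_size)]
    m = sum(means) / len(means)
    return batch_size * sum((b - m) ** 2 for b in means) / (len(means) - 1)
```

The recursive estimators of the paper reproduce this kind of estimate in a single pass with constant memory, which matters for very long simulation runs.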
129.
Both approximate Bayesian computation (ABC) and composite likelihood methods are useful for Bayesian and frequentist inference, respectively, when the likelihood function is intractable. We propose to use composite likelihood score functions as summary statistics in ABC in order to obtain accurate approximations to the posterior distribution. This is motivated by the use of the score function of the full likelihood, and is extended to general unbiased estimating functions in complex models. Moreover, we show that if the composite score is suitably standardised, the resulting ABC procedure is invariant to reparameterisations and automatically adjusts the curvature of the composite likelihood and of the corresponding posterior distribution. The method is illustrated through examples with simulated data, and an application to modelling of spatial extreme rainfall data is discussed.
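The ABC backbone that the proposal plugs into is plain rejection sampling: draw a parameter from the prior, simulate data, and keep the draw if its summary statistic is close to the observed one. The sketch below uses a generic scalar summary; the paper's contribution is taking a standardized composite score as that summary. All names are illustrative.

```python
def abc_rejection(observed_stat, simulate, summarize, prior_sample,
                  n_draws=2000, tol=0.1):
    """Plain rejection ABC with a scalar summary statistic.

    Keeps prior draws theta whose simulated summary lands within `tol`
    of `observed_stat`; the accepted draws approximate the posterior.
    """
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()                 # draw from the prior
        s = summarize(simulate(theta))         # summarize simulated data
        if abs(s - observed_stat) <= tol:
            accepted.append(theta)
    return accepted
```

For example, inferring a normal mean with the sample mean as summary concentrates the accepted draws around the observed mean; a better summary (like the composite score) sharpens this approximation in models where no simple sufficient statistic exists.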
130.
In analyzing interval-censored data, a non-parametric estimator is often desired because model fit is difficult to assess. For this reason, the non-parametric maximum likelihood estimator (NPMLE) is often the default estimator. However, estimates of quantities of interest from the survival function, such as the quantiles, have very large standard errors due to the jagged form of the estimator. By constraining the estimator to the class of log-concave functions, the estimator is ensured to have a smooth survival estimate with much better operating characteristics than the unconstrained NPMLE, without needing to specify a parametric family or a smoothing parameter. In this paper, we first prove that under mild conditions the likelihood can be maximized over a finite set of parameters, although the log-likelihood function is not strictly concave. We then present an efficient algorithm for computing a local maximum of the likelihood function. Using our fast new algorithm, we present evidence from simulated current status data suggesting that the rate of convergence of the log-concave estimator (between \(n^{2/5}\) and \(n^{1/2}\)) is faster than that of the unconstrained NPMLE (between \(n^{1/3}\) and \(n^{1/2}\)).
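To see why the unconstrained NPMLE is jagged, consider current status data, the special case used in the paper's simulations: each subject is examined once at time t and we only learn whether the event has occurred. There, the NPMLE of the distribution function F at the examination times is the isotonic regression (pool adjacent violators) of the event indicators ordered by examination time, a piecewise-constant and hence non-smooth fit. A minimal sketch (the paper's log-concave-constrained algorithm is not reproduced here):

```python
def current_status_npmle(times, indicators):
    """NPMLE of F at the examination times for current status data:
    PAVA (pool adjacent violators) applied to the 0/1 event indicators
    sorted by examination time, yielding a nondecreasing step function."""
    pairs = sorted(zip(times, indicators))
    blocks = []                                   # each block: [value, weight]
    for _, d in pairs:
        blocks.append([float(d), 1])
        while len(blocks) > 1 and blocks[-2][0] >= blocks[-1][0]:
            v2, w2 = blocks.pop()                 # pool violating neighbours
            v1, w1 = blocks.pop()
            blocks.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2])
    fit = []
    for v, w in blocks:
        fit.extend([v] * w)                       # expand blocks back out
    return [t for t, _ in pairs], fit
```

The flat pooled blocks in the output are exactly the jagged structure that the log-concavity constraint smooths away.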