71.
For survival endpoints in subgroup selection, a score conversion model is often used to convert each patient's set of biomarkers into a univariate score, with the median of the scores dividing the patients into biomarker-positive and biomarker-negative subgroups. However, this may bias patient subgroup identification in two situations: (1) treatment is equally effective for all patients and/or there is no subgroup difference; (2) the median of the univariate scores may be an inappropriate cutoff when the sizes of the two subgroups differ substantially. We utilize a univariate composite score method to convert each patient's candidate biomarkers into a univariate response score. To address the first issue, we propose applying the likelihood ratio test (LRT) to assess the homogeneity of the sampled patients; in the context of identifying the subgroup of responders in an adaptive design aimed at demonstrating improved treatment efficacy (adaptive power), we suggest carrying out subgroup selection only if the LRT is significant. For the second issue, we utilize a likelihood-based change-point algorithm to find an optimal cutoff. Our simulation study shows that the type I error is generally controlled, while performing the LRT sacrifices approximately 4.5% of the overall adaptive power to detect treatment effects in the simulation designs considered; furthermore, the change-point algorithm outperforms the median cutoff considerably when the subgroup sizes differ substantially.
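The change-point idea can be sketched as follows (a minimal illustration, not the authors' algorithm; the function name and the Gaussian working model for the scores are my assumptions): scan every admissible cutoff of the sorted univariate scores and keep the one maximizing the two-group Gaussian log-likelihood gain over a single-group fit.

```python
import numpy as np

def change_point_cutoff(scores):
    """Scan candidate cutoffs of the sorted scores; return (cutoff, LRT stat)
    maximizing the two-group Gaussian log-likelihood gain over one group."""
    s = np.sort(np.asarray(scores, dtype=float))
    n = s.size

    def gauss_ll(x):
        # profiled Gaussian log-likelihood: -m/2 * (log(2*pi*sigma2_hat) + 1)
        return -0.5 * x.size * (np.log(2 * np.pi * x.var()) + 1)

    ll0 = gauss_ll(s)
    best_cut, best_lr = None, -np.inf
    for k in range(5, n - 4):              # require >= 5 points on each side
        lr = 2 * (gauss_ll(s[:k]) + gauss_ll(s[k:]) - ll0)
        if lr > best_lr:
            best_cut, best_lr = 0.5 * (s[k - 1] + s[k]), lr
    return best_cut, best_lr
```

Unlike the median rule, the selected cutoff adapts when the two latent subgroups have very different sizes.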
72.
The estimation of mixtures of regression models is usually based on the assumption of normal components, and maximum likelihood estimation of the normal components is sensitive to noise, outliers, and high-leverage points. Missing values are inevitable in many situations, and parameter estimates can be biased if the missing values are not handled properly. In this article, we propose mixtures of regression models for contaminated, incomplete heterogeneous data. The proposed models provide robust estimates of regression coefficients varying across latent subgroups, even in the presence of missing values. The methodology is illustrated through simulation studies and a real data analysis.
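As background for the model class, here is a bare-bones EM fit of a K-component normal mixture of linear regressions on complete data; the paper's robustness to contamination and its missing-data handling are not reproduced, and all names are illustrative.

```python
import numpy as np

def mixreg_em(X, y, K=2, iters=300, restarts=5, seed=0):
    """EM for a K-component normal mixture of linear regressions,
    with random restarts to avoid poor local optima."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    best = (-np.inf, None)
    for _ in range(restarts):
        beta = rng.normal(scale=2.0, size=(K, p))
        sigma2 = np.ones(K)
        pi = np.full(K, 1.0 / K)
        for _ in range(iters):
            # E-step: responsibilities via log-sum-exp
            resid = y[:, None] - X @ beta.T                    # n x K
            logphi = -0.5 * (np.log(2*np.pi*sigma2) + resid**2 / sigma2)
            logw = np.log(pi) + logphi
            mx = logw.max(axis=1, keepdims=True)
            w = np.exp(logw - mx)
            ll = (mx.ravel() + np.log(w.sum(axis=1))).sum()    # observed loglik
            w /= w.sum(axis=1, keepdims=True)
            # M-step: weighted least squares per component
            for k in range(K):
                XtW = X.T * w[:, k]
                beta[k] = np.linalg.solve(XtW @ X + 1e-8*np.eye(p), XtW @ y)
                sigma2[k] = max((w[:, k] * (y - X @ beta[k])**2).sum()
                                / w[:, k].sum(), 1e-8)
            pi = w.mean(axis=0)
        if ll > best[0]:
            best = (ll, (beta.copy(), sigma2.copy(), pi.copy()))
    return best[1]
```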
73.
This paper studies the likelihood ratio ordering of parallel systems under multiple-outlier models. We introduce a partial order, the so-called θ-order, and show that the θ-order between the parameter vectors of two parallel systems implies the likelihood ratio order between the systems.
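To make the ordered object concrete: for independent exponential components the parallel-system (maximum) lifetime density is f(t) = [∏ F_i(t)] · Σ f_i(t)/F_i(t), and in the iid special case a system with proportionally smaller rates is known to dominate in the likelihood ratio order, which can be checked numerically as a sanity test. The θ-order itself is not implemented here.

```python
import numpy as np

def parallel_density(t, lams):
    """Density of the lifetime (maximum) of a parallel system of
    independent exponential components with rates lams."""
    t = np.asarray(t, dtype=float)
    F = np.prod([1 - np.exp(-l*t) for l in lams], axis=0)      # system cdf
    h = sum(l*np.exp(-l*t)/(1 - np.exp(-l*t)) for l in lams)   # sum f_i/F_i
    return F * h
```

The likelihood ratio order requires the density ratio of the dominating system to the dominated one to be nondecreasing.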
74.
This paper presents some powerful omnibus tests for multivariate normality based on the likelihood ratio and characterizations of the multivariate normal distribution. The power of the proposed tests is studied against various alternatives via Monte Carlo simulations. Simulation studies show that our tests compare well with other powerful tests, including multivariate versions of the Shapiro–Wilk test and the Anderson–Darling test.
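A Monte Carlo power study of the kind described can be sketched with a stand-in statistic (Mardia's multivariate kurtosis, not the authors' likelihood-ratio-based tests): simulate the null to obtain a critical value, then estimate the rejection rate under an alternative.

```python
import numpy as np

def mardia_kurtosis(X):
    """Mardia's multivariate kurtosis: mean of squared Mahalanobis distances."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    Sinv = np.linalg.inv(Xc.T @ Xc / n)
    d2 = np.einsum('ij,jk,ik->i', Xc, Sinv, Xc)
    return (d2**2).mean()

def mc_power(stat, n=50, p=2, reps=400, alpha=0.05, seed=0):
    """Estimate power against an independent-t(3) alternative using a
    simulated null critical value (upper-tailed rejection)."""
    rng = np.random.default_rng(seed)
    null = np.array([stat(rng.standard_normal((n, p))) for _ in range(reps)])
    crit = np.quantile(null, 1 - alpha)
    alt = np.array([stat(rng.standard_t(3, size=(n, p))) for _ in range(reps)])
    return (alt > crit).mean()
```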
75.
This article analyzes a growing family of fixed-T dynamic panel data estimators with a multifactor error structure. We use a unified notational approach to describe these estimators and discuss their properties in terms of deviations from an underlying set of basic assumptions. Furthermore, we consider the extensibility of these estimators to practical situations that frequently arise, such as their ability to accommodate unbalanced panels and common observed factors. Using a large-scale simulation exercise, we consider scenarios that remain largely unexplored in the literature despite being of great empirical relevance. In particular, we examine (i) the effect of the presence of weakly exogenous covariates, (ii) the effect of changing the magnitude of the correlation between the factor loadings of the dependent variable and those of the covariates, (iii) the impact of the number of moment conditions on the bias and size of GMM estimators, and (iv) the effect of sample size. We apply each of these estimators to a crime application using a panel data set of local government authorities in New South Wales, Australia; we find that the results bear substantially different policy implications from those derived from standard dynamic panel GMM estimators. Thus, our study may serve as a useful guide to practitioners who wish to allow for multiplicative sources of unobserved heterogeneity in their models.
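A minimal data-generating process of the kind such simulations use (my own sketch, with assumed parameter names): a dynamic panel with a single unobserved common factor whose loadings on the covariate are correlated with those on the dependent variable, controlled by `corr`.

```python
import numpy as np

def simulate_panel(N=300, T=6, rho=0.5, beta=1.0, corr=0.5, seed=0):
    """Simulate y_it = rho*y_{i,t-1} + beta*x_it + lam_i*f_t + e_it,
    with x loading on the same factor f through correlated loadings."""
    rng = np.random.default_rng(seed)
    f = rng.standard_normal(T)                        # common unobserved factor
    lam_y = rng.standard_normal(N)                    # loadings in the y equation
    lam_x = corr*lam_y + np.sqrt(1 - corr**2)*rng.standard_normal(N)
    x = rng.standard_normal((N, T)) + lam_x[:, None]*f[None, :]
    y = np.empty((N, T))
    prev = rng.standard_normal(N)                     # initial condition
    for t in range(T):
        y[:, t] = rho*prev + beta*x[:, t] + lam_y*f[t] + rng.standard_normal(N)
        prev = y[:, t]
    return y, x
```

Setting `corr=0` reproduces the benign case; increasing it lets one trace how estimator bias responds to loading correlation, as in scenario (ii).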
76.
We introduce and study general mathematical properties of a new generator of continuous distributions with one extra parameter, called the generalized odd half-Cauchy family. We present some special models and investigate the asymptotics and shapes. The new density function can be expressed as a linear mixture of exponentiated densities based on the same baseline distribution. We derive a power series for the quantile function. We discuss estimation of the model parameters by maximum likelihood and demonstrate empirically the flexibility of the new family by means of two real data sets.
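Assuming the generator takes the usual odd-family form F(x) = (2/π)·arctan(G(x)^α / (1 − G(x)^α)) — my reading of the family's name, not a formula quoted from the paper — the cdf and quantile function with an exponential baseline look like:

```python
import numpy as np

def gohc_cdf(x, alpha=1.5, lam=1.0):
    """Assumed generalized odd half-Cauchy cdf with exponential baseline
    G(x) = 1 - exp(-lam*x); alpha is the extra shape parameter."""
    G = 1.0 - np.exp(-lam * np.asarray(x, dtype=float))
    Ga = G**alpha
    return (2/np.pi) * np.arctan(Ga / (1 - Ga))

def gohc_quantile(u, alpha=1.5, lam=1.0):
    """Inverse of gohc_cdf: undo the arctan, the odd ratio, and the baseline."""
    t = np.tan(np.pi * np.asarray(u, dtype=float) / 2)
    G = (t / (1 + t))**(1/alpha)
    return -np.log(1 - G) / lam
```

The closed-form quantile makes random variate generation by inversion immediate.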
77.
In this paper, we investigate four existing and three new confidence interval estimators for the negative binomial proportion (i.e., the proportion under inverse/negative binomial sampling). An extensive and systematic comparative study of these estimators through Monte Carlo simulations is presented. Their performance is evaluated in terms of coverage probabilities and expected interval widths. Our simulation studies suggest that the confidence interval estimator based on the saddlepoint approximation is more appealing for large coverage levels (e.g., nominal level ≤ 1%), whereas the score confidence interval estimator is more desirable for commonly used coverage levels (e.g., nominal level > 1%). We illustrate these confidence interval construction methods with a real data set from a maternal congenital heart disease study.
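For the score interval under inverse sampling, with likelihood ∝ p^r (1 − p)^x (x failures observed before the r-th success) the score statistic reduces to U²/I = (r − (r + x)p)² / (r(1 − p)), and the interval can be obtained by a simple grid inversion. This is a numerical sketch, not the authors' implementation.

```python
import numpy as np

def nb_score_ci(r, x, chi2_crit=3.841458820694124):
    """Score confidence interval for p under inverse sampling: keep all p
    whose score statistic is below chi2_crit (chi-square(1) 95% quantile
    by default)."""
    p = np.linspace(1e-4, 1 - 1e-4, 200000)
    stat = (r - (r + x)*p)**2 / (r*(1 - p))   # U(p)^2 / I(p)
    keep = p[stat <= chi2_crit]
    return keep.min(), keep.max()
```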
78.
An affiliation network is a kind of two-mode social network with two different sets of nodes (a set of actors and a set of social events) and edges representing the affiliation of the actors with the social events. Although a number of statistical models have been proposed to analyze affiliation networks, the asymptotic behavior of the estimators is still unknown or has not been properly explored. In this article, we study an affiliation model in which the degree sequence is the exclusive natural sufficient statistic in the exponential family of distributions. We establish the uniform consistency and asymptotic normality of the maximum likelihood estimator when the numbers of actors and events both go to infinity. Simulation studies and a real data example demonstrate our theoretical results.
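Such a model can be sketched as a bipartite β-model, P(A_ij = 1) = σ(a_i + b_j), whose MLE matches the actor and event degree sequences. Below is a naive gradient-ascent fit (my own sketch; the paper's asymptotic analysis is not reproduced):

```python
import numpy as np

def fit_affiliation_mle(A, iters=2000, lr=0.1):
    """Gradient ascent for P(A_ij=1) = sigmoid(a_i + b_j); at the MLE the
    fitted expected degrees equal the observed degree sequences."""
    n, m = A.shape
    a, b = np.zeros(n), np.zeros(m)
    da, db = A.sum(axis=1), A.sum(axis=0)     # actor / event degree sequences
    for _ in range(iters):
        P = 1.0/(1.0 + np.exp(-(a[:, None] + b[None, :])))
        a += lr*(da - P.sum(axis=1))/m        # score: degree minus expected
        b += lr*(db - P.sum(axis=0))/n
        c = b.mean(); b -= c; a += c          # fix additive identifiability
    return a, b
```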
79.
The receiver operating characteristic (ROC) curve is one of the most commonly used methods to compare the diagnostic performance of two or more laboratory or diagnostic tests. In this paper, we propose semi-empirical likelihood based confidence intervals for the ROC curves of two populations, where one population is parametric, the other is non-parametric, and both have missing data. After imputing the missing values, we derive the semi-empirical likelihood ratio statistic and the corresponding likelihood equations. It is shown that the log semi-empirical likelihood ratio statistic is asymptotically scaled chi-squared. The estimating equations are solved simultaneously to obtain the estimated lower and upper bounds of the semi-empirical likelihood confidence intervals. We conduct extensive simulation studies to evaluate the finite-sample performance of the proposed confidence intervals with various sample sizes and different missing probabilities.
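As a point of reference, the empirical ROC with a crude mean-imputation step (a stand-in for the paper's imputation; the semi-empirical likelihood machinery is not reproduced) is:

```python
import numpy as np

def empirical_roc(cases, controls, fpr):
    """Empirical ROC(t) = P(case score > control (1-t)-quantile),
    with NaNs replaced by the observed group mean."""
    cases = np.asarray(cases, dtype=float)
    controls = np.asarray(controls, dtype=float)
    cases = np.where(np.isnan(cases), np.nanmean(cases), cases)
    controls = np.where(np.isnan(controls), np.nanmean(controls), controls)
    thr = np.quantile(controls, 1 - np.atleast_1d(fpr))
    return np.array([(cases > t).mean() for t in thr])
```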
80.
Density estimation for pre-binned data is challenging because the exact positions of the original observations are lost. Traditional kernel density estimation methods cannot be applied when data are pre-binned in unequally spaced bins or when one or more bins are semi-infinite intervals. We propose a novel density estimation approach using the generalized lambda distribution (GLD) for data that have been pre-binned over a sequence of consecutive bins. This method enjoys the high power of a parametric model and the great shape flexibility of the GLD. The performance of the proposed estimators is benchmarked via simulation studies. Both the simulation results and a real data application show that the proposed density estimators work well for data of moderate or large sizes.
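The core evaluation step can be sketched as follows: under a GLD (here in the Ramberg–Schmeiser parameterization, my choice), bin probabilities for arbitrary, possibly unequally spaced edges follow from numerically inverting the quantile function, giving a multinomial log-likelihood to maximize. The optimization over the GLD parameters is omitted.

```python
import numpy as np

def gld_quantile(u, lam):
    """GLD quantile in the Ramberg–Schmeiser parameterization; monotone here
    because we use lam2 > 0 and lam3, lam4 > 0."""
    l1, l2, l3, l4 = lam
    u = np.asarray(u, dtype=float)
    return l1 + (u**l3 - (1 - u)**l4) / l2

def binned_loglik(lam, edges, counts, grid=100001):
    """Multinomial log-likelihood of pre-binned counts under a GLD: get the
    cdf at the bin edges by numerically inverting Q on a fine u-grid."""
    u = np.linspace(1e-6, 1 - 1e-6, grid)
    x = gld_quantile(u, lam)
    cdf = np.interp(edges, x, u, left=0.0, right=1.0)  # F at the edges
    p = np.clip(np.diff(cdf), 1e-12, None)             # bin probabilities
    return float((counts * np.log(p)).sum())
```

Because `np.interp` clamps to 0 and 1 outside the quantile range, semi-infinite outer bins are handled naturally.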
Copyright©北京勤云科技发展有限公司  京ICP备09084417号