91.
Factor models, structural equation models (SEMs) and random-effect models share the common feature that they assume latent or unobserved random variables. Factor models and SEMs allow well-developed procedures for a rich class of covariance models with many parameters, while random-effect models allow well-developed procedures for non-normal models, including heavy-tailed distributions for responses and random effects. In this paper, we show how these two developments can be combined to yield an extremely rich class of models, which can be beneficial to both areas. A new fitting procedure for binary factor models and a robust estimation approach for continuous factor models are proposed.
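As a point of reference for the two modelling traditions being combined, the sketch below fits a standard Gaussian factor model, the special case that both the factor-analytic/SEM and random-effect developments extend. It uses simulated data and scikit-learn; it does not reproduce the binary fitting procedure or the robust heavy-tailed estimation proposed in the paper, and all dimensions are illustrative.

```python
# Minimal sketch: a standard Gaussian factor model, the special case that the
# paper's binary and robust (heavy-tailed) procedures generalize.
# Dimensions and seeds are illustrative, not taken from the paper.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n, p, k = 500, 6, 2                      # observations, observed variables, latent factors
Lambda = rng.normal(size=(p, k))         # true loadings
Z = rng.normal(size=(n, k))              # latent factors
X = Z @ Lambda.T + rng.normal(scale=0.5, size=(n, p))  # observed data

fa = FactorAnalysis(n_components=k).fit(X)
print("estimated loadings:\n", fa.components_.T)       # p x k loading matrix
print("estimated unique variances:", fa.noise_variance_)
```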
92.
The additive Cox model is flexible and powerful for modelling the dynamic changes of regression coefficients in survival analysis. This paper is concerned with feature screening for the additive Cox model with ultrahigh-dimensional covariates. The proposed screening procedure can effectively identify active predictors: with probability tending to one, the selected variable set includes the actual active predictors. To carry out the proposed procedure, we develop an effective algorithm and establish its ascent property. We further prove that the proposed procedure possesses the sure screening property. Finally, we examine the finite-sample performance of the proposed procedure via Monte Carlo simulations and illustrate it with a real data example.
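The screening idea can be illustrated with a simplified marginal (sure-independence-type) screening step under an ordinary proportional hazards fit. The sketch below uses the lifelines package on simulated data; it does not implement the additive Cox model with time-varying coefficients or the algorithm analysed in the paper, and the data-generating choices are assumptions made for illustration.

```python
# Illustrative marginal screening under a proportional-hazards fit, a simplified
# stand-in for the paper's additive-Cox screening; all names are made up.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n, p = 200, 200                # p kept modest so the sketch runs quickly; the paper targets p >> n
X = pd.DataFrame(rng.normal(size=(n, p)), columns=[f"x{j}" for j in range(p)])
hazard = np.exp(1.5 * X["x0"] - 1.0 * X["x1"])           # only x0 and x1 are active
T = rng.exponential(1.0 / hazard)                         # event times
E = np.ones(n, dtype=int)                                 # no censoring, for simplicity

utility = {}
for col in X.columns:
    df = pd.DataFrame({"T": T, "E": E, col: X[col]})
    fit = CoxPHFitter().fit(df, duration_col="T", event_col="E")
    utility[col] = abs(fit.params_[col])                  # marginal strength of covariate

d = int(n / np.log(n))                                    # a conventional screening size
selected = sorted(utility, key=utility.get, reverse=True)[:d]
print("active predictors retained:", {"x0", "x1"} <= set(selected))
```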
93.
Estimation in the multivariate context when the number of observations available is less than the number of variables is a classical theoretical problem. To ensure estimability, one has to impose certain constraints on the parameters. A method for maximum likelihood estimation under constraints is proposed to solve this problem. Even in the extreme case where only a single multivariate observation is available, this may provide a feasible solution. It simultaneously provides a simple, straightforward methodology for allowing specific structures within and between the covariance matrices of several populations. This methodology yields exact maximum likelihood estimates.
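The estimability problem and the role of constraints can be seen in a few lines. The sketch below shows the singular sample covariance when n < p and an extreme, purely illustrative constraint (a common-variance diagonal structure) under which the maximum likelihood estimate is still well defined; the constraint is an assumption for illustration, not the structure used in the paper.

```python
# Illustration of the n < p estimability problem and of how a structural
# constraint restores a usable estimate. The common-variance diagonal
# constraint below is an illustrative choice, not the paper's method.
import numpy as np

rng = np.random.default_rng(2)
n, p = 5, 20                                   # fewer observations than variables
X = rng.normal(size=(n, p))

S = np.cov(X, rowvar=False)                    # unconstrained sample covariance
print("rank of S:", np.linalg.matrix_rank(S), "out of", p)   # rank <= n-1: singular

# Constrained MLE under Sigma = sigma^2 * I (an extreme but estimable structure):
sigma2_hat = ((X - X.mean(axis=0)) ** 2).mean()
Sigma_hat = sigma2_hat * np.eye(p)
print("constrained estimate is full rank:", np.linalg.matrix_rank(Sigma_hat) == p)
```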
94.
For survival endpoints in subgroup selection, a score conversion model is often used to convert the set of biomarkers for each patient into a univariate score, with the median of the scores dividing the patients into biomarker-positive and biomarker-negative subgroups. However, this may bias patient subgroup identification for two reasons: (1) treatment may be equally effective for all patients and/or there may be no subgroup difference; and (2) the median score may be an inappropriate cutoff if the sizes of the two subgroups differ substantially. We utilize a univariate composite score method to convert each patient's set of candidate biomarkers into a univariate response score. To address the first issue, we propose applying the likelihood ratio test (LRT) to assess homogeneity of the sampled patients. In the context of identifying the subgroup of responders in an adaptive design to demonstrate improvement of treatment efficacy (adaptive power), we suggest that subgroup selection be carried out only if the LRT is significant. For the second issue, we utilize a likelihood-based change-point algorithm to find an optimal cutoff. Our simulation study shows that the type I error is generally controlled, while the overall adaptive power to detect treatment effects is reduced by approximately 4.5% for the simulation designs considered when the LRT is performed; furthermore, the change-point algorithm outperforms the median cutoff considerably when the subgroup sizes differ substantially.
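A minimal sketch of the likelihood-based ideas in the abstract is given below: a profile search over candidate cutoffs of a univariate score, assuming normal scores within each subgroup, compared against the homogeneous single-normal fit. The simulated scores, the normality assumption, and the grid of candidate cutoffs are illustrative; the paper's LRT and change-point algorithm are not reproduced exactly.

```python
# A simplified likelihood-based change-point search over candidate cutoffs of a
# univariate composite score, assuming normal scores within each subgroup.
# This is a sketch of the general idea, not the paper's exact algorithm.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
scores = np.concatenate([rng.normal(0.0, 1.0, 300),    # biomarker-negative subgroup
                         rng.normal(2.0, 1.0, 100)])   # biomarker-positive subgroup

def split_loglik(x, cutoff):
    """Two-normal log-likelihood when patients are split at `cutoff`."""
    lo, hi = x[x <= cutoff], x[x > cutoff]
    if len(lo) < 5 or len(hi) < 5:                     # guard against degenerate splits
        return -np.inf
    ll = 0.0
    for g in (lo, hi):
        ll += norm.logpdf(g, loc=g.mean(), scale=g.std(ddof=0) + 1e-8).sum()
    return ll

candidates = np.quantile(scores, np.linspace(0.05, 0.95, 181))
best_cut = max(candidates, key=lambda c: split_loglik(scores, c))

# LRT-style comparison against the homogeneous (single-normal) model:
ll0 = norm.logpdf(scores, scores.mean(), scores.std(ddof=0)).sum()
lrt = 2 * (split_loglik(scores, best_cut) - ll0)
print(f"selected cutoff = {best_cut:.2f}, median cutoff = {np.median(scores):.2f}")
print(f"LRT-type statistic = {lrt:.1f}")
```

With subgroups of unequal size, as here, the likelihood-selected cutoff sits near the true boundary between the two score populations, whereas the median splits the larger subgroup, which is the second issue the abstract raises.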
95.
Estimation of mixtures of regression models is usually based on the assumption of normal components, and maximum likelihood estimation of the normal components is sensitive to noise, outliers, and high-leverage points. Missing values are inevitable in many situations, and parameter estimates can be biased if the missing values are not handled properly. In this article, we propose mixtures of regression models for contaminated, incomplete, heterogeneous data. The proposed models provide robust estimates of regression coefficients varying across latent subgroups even in the presence of missing values. The methodology is illustrated through simulation studies and a real data analysis.
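For reference, a bare-bones EM algorithm for a two-component mixture of linear regressions on complete, uncontaminated data is sketched below; the paper's robust handling of contamination and of missing values is not reproduced, and all simulated quantities are illustrative.

```python
# Bare-bones EM for a two-component mixture of linear regressions on complete,
# uncontaminated data; this sketch omits the paper's robust and missing-data
# machinery and simply recovers component-specific regression coefficients.
import numpy as np

rng = np.random.default_rng(4)
n = 400
x = rng.uniform(-2, 2, n)
z = rng.random(n) < 0.5                                # latent component labels
y = np.where(z, 1.0 + 2.0 * x, 3.0 - 1.5 * x) + rng.normal(0, 0.4, n)
X = np.column_stack([np.ones(n), x])                   # design matrix with intercept

K = 2
beta = rng.normal(size=(K, 2))                         # per-component coefficients
sigma = np.ones(K)
pi = np.full(K, 1.0 / K)

for _ in range(100):
    # E-step: responsibilities of each component for each observation
    resid = y[:, None] - X @ beta.T                    # n x K residual matrix
    dens = pi * np.exp(-0.5 * (resid / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted least squares per component
    for k in range(K):
        W = r[:, k]
        XtW = X.T * W
        beta[k] = np.linalg.solve(XtW @ X, XtW @ y)
        sigma[k] = np.sqrt((W * (y - X @ beta[k]) ** 2).sum() / W.sum())
    pi = r.mean(axis=0)

print("estimated coefficients per component:\n", beta)
```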
96.
This paper studies the likelihood ratio ordering of parallel systems under multiple-outlier models. We introduce a partial order, the so-called θ-order, and show that the θ-order between the parameter vectors of the parallel systems implies the likelihood ratio order between the systems.
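For readers unfamiliar with the ordering, the standard definitions the abstract relies on are, in generic notation, as follows; the paper's new θ-order itself is not reproduced here.

```latex
% Likelihood ratio order: for random variables X and Y with densities f and g,
\[
  X \le_{\mathrm{lr}} Y
  \quad\Longleftrightarrow\quad
  \frac{g(t)}{f(t)} \ \text{is increasing in } t .
\]
% Parallel system lifetime: with component lifetimes X_1,\dots,X_n,
\[
  T_{\text{parallel}} = \max\{X_1,\dots,X_n\},
\]
% and in a multiple-outlier model the components follow one distribution except
% for a subset of "outlier" components that follow another.
```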
97.
This article analyzes a growing group of fixed-T dynamic panel data estimators with a multifactor error structure. We use a unified notational approach to describe these estimators and discuss their properties in terms of deviations from an underlying set of basic assumptions. Furthermore, we consider the extendability of these estimators to practical situations that frequently arise, such as their ability to accommodate unbalanced panels and common observed factors. Using a large-scale simulation exercise, we consider scenarios that remain largely unexplored in the literature despite being of great empirical relevance. In particular, we examine (i) the effect of the presence of weakly exogenous covariates, (ii) the effect of changing the magnitude of the correlation between the factor loadings of the dependent variable and those of the covariates, (iii) the impact of the number of moment conditions on bias and size for GMM estimators, and finally (iv) the effect of sample size. We apply each of these estimators to a crime application using a panel data set of local government authorities in New South Wales, Australia; we find that the results bear substantially different policy implications relative to those potentially derived from standard dynamic panel GMM estimators. Thus, our study may serve as a useful guide for practitioners who wish to allow for multiplicative sources of unobserved heterogeneity in their models.
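In generic notation (not necessarily the article's), the model class in question is the fixed-T dynamic panel with a multifactor error structure:

```latex
\[
  y_{it} = \rho\, y_{i,t-1} + \beta' x_{it} + u_{it},
  \qquad
  u_{it} = \lambda_i' f_t + \varepsilon_{it},
  \qquad i = 1,\dots,N,\ t = 1,\dots,T,
\]
% where f_t are unobserved common factors, \lambda_i are unit-specific factor
% loadings, and T is small and fixed while N grows. The correlation between
% \lambda_i and the loadings entering x_{it} is one of the quantities varied
% in the simulations.
```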
98.
PCORnet, the National Patient-Centered Clinical Research Network, seeks to establish a robust national health data network for patient-centered comparative effectiveness research. This article reports the results of a PCORnet survey designed to identify the ethics and regulatory challenges anticipated in network implementation. A 12-item online survey was developed by leadership of the PCORnet Ethics and Regulatory Task Force; responses were collected from the 29 PCORnet networks. The most pressing ethics issues identified related to informed consent, patient engagement, privacy and confidentiality, and data sharing. High-priority regulatory issues included IRB coordination, privacy and confidentiality, informed consent, and data sharing. Over 150 IRBs and five different approaches to managing multisite IRB review were identified within PCORnet. Further empirical and scholarly work, as well as practical and policy guidance, is essential if important initiatives that rely on comparative effectiveness research are to move forward.
99.
We introduce and study general mathematical properties of a new generator of continuous distributions with one extra parameter, called the generalized odd half-Cauchy family. We present some special models and investigate the asymptotics and shapes. The new density function can be expressed as a linear mixture of exponentiated densities based on the same baseline distribution. We derive a power series for the quantile function. We discuss estimation of the model parameters by maximum likelihood and demonstrate empirically the flexibility of the new family by means of two real data sets.
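The exact parametrization should be taken from the paper; one construction consistent with the family's name applies the half-Cauchy cdf H(t) = (2/π)arctan(t) to a generalized odd ratio of a baseline cdf G(x; ξ), as sketched below.

```latex
% One construction consistent with the family's name (an assumed form; the
% exact parametrization should be taken from the paper):
\[
  F(x) = \frac{2}{\pi}\arctan\!\left(\frac{G(x;\xi)^{\alpha}}{1 - G(x;\xi)^{\alpha}}\right),
  \qquad \alpha > 0,
\]
% where \alpha is the single extra parameter and \xi collects the baseline
% parameters. Differentiating F gives the density that admits the linear
% mixture representation in exponentiated-G densities mentioned in the abstract.
```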
100.
In this paper, we investigate four existing and three new confidence interval estimators for the negative binomial proportion (i.e., the proportion under inverse/negative binomial sampling). An extensive and systematic comparative study of these confidence interval estimators through Monte Carlo simulations is presented. The performance of these confidence intervals is evaluated in terms of their coverage probabilities and expected interval widths. Our simulation studies suggest that the confidence interval estimator based on the saddlepoint approximation is more appealing for large coverage levels (e.g., nominal level ≤ 1%), whereas the score confidence interval estimator is more desirable for the commonly used coverage levels (e.g., nominal level > 1%). We illustrate these confidence interval construction methods with a real data set from a maternal congenital heart disease study.
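The kind of Monte Carlo coverage study described can be sketched in a few lines. The snippet below evaluates only a simple Wald-type interval (using the asymptotic variance p²(1−p)/r under inverse binomial sampling with r required successes) as a stand-in for the seven estimators compared in the paper; the parameter values are illustrative.

```python
# Sketch of a Monte Carlo coverage/width study for a confidence interval on the
# negative binomial proportion, using a simple Wald interval as a stand-in for
# the seven estimators compared in the paper. Parameter values are illustrative.
import numpy as np
from scipy.stats import nbinom, norm

rng = np.random.default_rng(5)
p_true, r, level, n_sim = 0.3, 10, 0.95, 20_000
z = norm.ppf(0.5 + level / 2)

failures = nbinom.rvs(r, p_true, size=n_sim, random_state=rng)  # failures before r-th success
p_hat = r / (r + failures)                                       # MLE of the proportion
se = np.sqrt(p_hat ** 2 * (1 - p_hat) / r)                       # Wald standard error
covered = (p_hat - z * se <= p_true) & (p_true <= p_hat + z * se)
width = 2 * z * se

print(f"empirical coverage = {covered.mean():.3f}")
print(f"mean interval width = {width.mean():.3f}")
```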