931.
The skew-normal model is a class of distributions that extends the Gaussian family by adding a skewness parameter. The model presents some inferential problems linked to the estimation of this parameter: in particular, its maximum likelihood estimate can be infinite, especially for moderate sample sizes, and it is not clear how to calculate confidence intervals for the parameter. In this work, we show how these inferential problems can be solved if we are interested in the distribution of extreme statistics of two random variables with a joint normal distribution. Such situations are not uncommon in applications, especially in medical and environmental contexts, where estimating the distribution of extreme statistics can be relevant. A theoretical result of Loperfido [Statist. Probab. Lett. 56 (2002)] proves that such extreme statistics have a skew-normal distribution whose skewness parameter can be expressed as a function of the correlation coefficient between the two initial variables. It is then possible, using theoretical results involving the correlation coefficient, to find approximate confidence intervals for the skewness parameter. These theoretical intervals are compared with parametric bootstrap intervals by means of a simulation study, and two applications with real data are given.
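Loperfido's result can be checked numerically: for a standard exchangeable bivariate normal pair with correlation ρ, the maximum of the pair is skew-normal with shape parameter √((1−ρ)/(1+ρ)). The sketch below (synthetic data, `scipy.stats.skewnorm`) simulates pairs and compares the empirical distribution of the maximum against that skew-normal law.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = 0.5
n = 20000

# Draw an exchangeable standard bivariate normal sample with correlation rho.
cov = [[1.0, rho], [rho, 1.0]]
xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
m = xy.max(axis=1)               # componentwise maximum of each pair

# Loperfido (2002): max(X, Y) is skew-normal with shape sqrt((1-rho)/(1+rho)).
alpha = np.sqrt((1 - rho) / (1 + rho))
ks = stats.kstest(m, stats.skewnorm(a=alpha).cdf)
print(f"alpha = {alpha:.4f}, KS p-value = {ks.pvalue:.3f}")
```

A large KS p-value indicates the simulated maxima are consistent with the skew-normal law, which is the starting point for building confidence intervals for the skewness parameter via the correlation coefficient.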
932.
The importance of the dispersion parameter for counts arising in toxicology, biology, clinical medicine, epidemiology, and similar studies is well known. A few procedures for constructing confidence intervals (CIs) for the dispersion parameter have been investigated, but little attention has been paid to the accuracy of these CIs. In this paper, we introduce the profile likelihood (PL) approach and the hybrid profile variance (HPV) approach for constructing CIs for the dispersion parameter of counts under the negative binomial model. The non-parametric bootstrap (NPB) approach based on the maximum likelihood (ML) estimate of the dispersion parameter is also considered. We then compare the proposed approaches with an asymptotic approach based on the ML and restricted ML (REML) estimates of the dispersion parameter, as well as with the parametric bootstrap (PB) approach based on the ML estimate. As assessed by Monte Carlo simulations, the PL approach has the best small-sample performance, followed by the REML, HPV, NPB, and PB approaches. Three examples with biological count data are presented.
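A minimal sketch of the PL idea, under the common NB2 parameterization Var = μ + αμ² (an assumption; the paper's exact parameterization may differ): for fixed α the ML estimate of μ is the sample mean, so the profile log-likelihood in α is easy to evaluate, and a 95% CI collects the α values not rejected by a likelihood-ratio test.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
mu_true, alpha_true = 5.0, 0.4          # NB2: Var = mu + alpha * mu^2
r_true = 1 / alpha_true
y = rng.negative_binomial(r_true, r_true / (r_true + mu_true), 200)
mu_hat = y.mean()                       # MLE of mu given alpha is the sample mean

def prof_ll(alpha):
    """Profile log-likelihood of alpha with mu profiled out at the sample mean."""
    r = 1.0 / alpha
    return stats.nbinom.logpmf(y, r, r / (r + mu_hat)).sum()

# Maximize the profile likelihood over alpha.
res = optimize.minimize_scalar(lambda a: -prof_ll(a), bounds=(1e-4, 5.0),
                               method="bounded")
alpha_hat, ll_max = res.x, -res.fun

# 95% PL interval: alphas whose likelihood-ratio statistic stays below chi2(1).
cut = ll_max - stats.chi2.ppf(0.95, df=1) / 2
lo = optimize.brentq(lambda a: prof_ll(a) - cut, 1e-4, alpha_hat)
hi = optimize.brentq(lambda a: prof_ll(a) - cut, alpha_hat, 5.0)
print(f"alpha_hat = {alpha_hat:.3f}, 95% PL CI = ({lo:.3f}, {hi:.3f})")
```

The HPV, NPB, and PB approaches compared in the paper would replace the likelihood-ratio cutoff with variance-based or resampling-based interval constructions.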
934.
Lesion count observed on brain magnetic resonance imaging scans is a common end point in phase 2 clinical trials evaluating therapeutic treatments for relapsing–remitting multiple sclerosis (MS). This paper compares the performance of Poisson, zero-inflated Poisson (ZIP), negative binomial (NB), and zero-inflated NB (ZINB) mixed-effects regression models in fitting lesion count data from a clinical trial evaluating the efficacy and safety of fingolimod in comparison with placebo in MS. The NB and ZINB models prove superior to the Poisson and ZIP models. We discuss the advantages and limitations of zero-inflated models in the context of MS treatment. Copyright © 2012 John Wiley & Sons, Ltd.
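The NB-versus-ZINB comparison can be illustrated with a stripped-down, intercept-only sketch (no mixed effects, synthetic counts; the mixed-effects structure in the paper is omitted here). Both models are fitted by direct maximum likelihood and compared by AIC.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)

# Synthetic zero-inflated lesion counts: structural zeros with probability pi0,
# otherwise NB with mean mu and NB2 dispersion alpha.
pi0, mu, alpha, n = 0.3, 3.0, 0.3, 500
r = 1 / alpha
nb_draws = rng.negative_binomial(r, r / (r + mu), n)
y = np.where(rng.random(n) < pi0, 0, nb_draws)

def nll_nb(theta):
    mu_, r_ = np.exp(theta)                    # log-parameters keep both positive
    return -stats.nbinom.logpmf(y, r_, r_ / (r_ + mu_)).sum()

def nll_zinb(theta):
    mu_, r_ = np.exp(theta[:2])
    pi_ = 1 / (1 + np.exp(-theta[2]))          # logit-parameterized zero mass
    base = stats.nbinom.pmf(y, r_, r_ / (r_ + mu_))
    lik = np.where(y == 0, pi_ + (1 - pi_) * base, (1 - pi_) * base)
    return -np.log(lik).sum()

fit_nb = optimize.minimize(nll_nb, [1.0, 0.5], method="Nelder-Mead")
fit_zinb = optimize.minimize(nll_zinb, [1.0, 0.5, 0.0], method="Nelder-Mead")
aic_nb = 2 * fit_nb.fun + 2 * 2
aic_zinb = 2 * fit_zinb.fun + 2 * 3
pi_hat = 1 / (1 + np.exp(-fit_zinb.x[2]))
print(f"AIC  NB: {aic_nb:.1f}   ZINB: {aic_zinb:.1f}   pi_hat = {pi_hat:.2f}")
```

On zero-inflated data the ZINB fit should attain the lower AIC, mirroring the paper's finding that models accommodating both overdispersion and excess zeros fit lesion counts best.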
935.
Frequently, count data obtained from dilution assays are subject to an upper detection limit, so the observed counts are censored. In addition, counts from the same subject at different dilution levels are correlated. Ignoring the censoring and the correlation may produce unreliable and misleading results, so any meaningful analysis must address both simultaneously. Such comprehensive modeling approaches are not widely used for dilution assay data. Traditionally, these data are analyzed with a general linear model on a log-transformed average count per subject, but this approach ignores between-subject variability and risks producing inconsistent results and unreliable conclusions. In this paper, we propose a censored negative binomial model with normal random effects for such data. In addition to the censoring and the correlation, the model accommodates any overdispersion present in the counts, and it is widely accessible through several modern statistical software packages.
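The censoring piece of the likelihood can be sketched in isolation (the random-effects part is omitted here; data and the detection limit are synthetic): counts below the limit contribute the NB pmf, while counts at or above it contribute the tail probability P(Y ≥ limit). Ignoring the censoring biases the mean downward.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)

# Simulate NB counts, then right-censor everything at an upper detection limit.
mu_true, r_true, limit = 8.0, 2.0, 12
y_full = rng.negative_binomial(r_true, r_true / (r_true + mu_true), 300)
cens = y_full >= limit
y = np.minimum(y_full, limit)

def nll(theta):
    mu_, r_ = np.exp(theta)
    p = r_ / (r_ + mu_)
    # Uncensored counts contribute the pmf; censored ones contribute P(Y >= limit).
    ll_obs = stats.nbinom.logpmf(y[~cens], r_, p).sum()
    ll_cens = cens.sum() * np.log(stats.nbinom.sf(limit - 1, r_, p))
    return -(ll_obs + ll_cens)

fit = optimize.minimize(nll, [np.log(5.0), np.log(1.0)], method="Nelder-Mead")
mu_hat, r_hat = np.exp(fit.x)
naive_mu = y.mean()          # ignoring censoring biases the mean downward
print(f"mu_hat = {mu_hat:.2f} (naive mean {naive_mu:.2f}), r_hat = {r_hat:.2f}")
```

The full model in the paper additionally integrates a normal random effect over subjects to capture the correlation across dilution levels.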
936.
The social environment influences health outcomes for older adults and could be an important target for interventions to reduce costly medical care. We sought to understand which elements of the social environment distinguish communities that achieve lower health care utilization and costs from those that experience higher utilization and costs for older adults with complex needs. Using a sequential explanatory mixed-methods approach, we classified community performance on three outcomes: the rate of hospitalizations for ambulatory care sensitive conditions, all-cause risk-standardized hospital readmission rates, and Medicare spending per beneficiary. We conducted in-depth interviews with key informants (N = 245) from organizations providing health or social services. Higher performing communities were distinguished by several features of the social environment that were lacking in lower performing communities: (1) strong informal support networks; (2) partnerships between faith-based organizations and health care and social service organizations; and (3) grassroots organizing and advocacy efforts. Higher performing communities share social environmental features that complement the work of health care and social service organizations. Many of the supportive features and programs identified in higher performing communities were developed locally and with limited governmental funding, suggesting opportunities for improvement elsewhere.
937.
Mixed-effects model for repeated measures (MMRM) analyses using the Kenward–Roger method for adjusting standard errors and degrees of freedom under an unstructured (UN) covariance structure are increasingly common as primary analyses for group comparisons in longitudinal clinical trials. We evaluate the performance of the MMRM-UN analysis with the Kenward–Roger method when the outcome variance differs between treatment groups, and we provide alternative approaches for valid inference within the MMRM framework. Two simulations are conducted: (1) unequal variance but equal correlation between the treatment groups, and (2) unequal variance and unequal correlation between the groups. In the first simulation, the MMRM-UN analysis with the Kenward–Roger method based on a common covariance matrix for the groups yields notably poor coverage probability (CP) for confidence intervals of the treatment effect when both the variance and the sample size differ between the groups. Even with 1:1 randomization, the CP falls seriously below the nominal confidence level when the group with the larger dropout proportion also has the larger variance. An MMRM analysis with the Mancl and DeRouen covariance estimator performs relatively better than the traditional MMRM-UN analysis. In the second simulation, the traditional MMRM-UN analysis gives a biased estimate of the treatment effect and notably poor CP, whereas an MMRM analysis fitting separate UN covariance structures for each group provides an unbiased estimate of the treatment effect and an acceptable CP. We therefore do not recommend the MMRM-UN analysis using the Kenward–Roger method based on a common covariance matrix for the treatment groups, although it is frequently seen in applications, when heteroscedasticity between the groups is apparent in incomplete longitudinal data.
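The core phenomenon has a much simpler cross-sectional analogue, sketched below with synthetic data (this is an illustration of the pooled-versus-separate-variance issue, not the paper's MMRM simulation): when the smaller group has the larger variance, a confidence interval built on a common (pooled) variance undercovers badly, while a heteroscedasticity-aware (Welch) interval stays near nominal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Small group with large variance vs large group with small variance.
n1, n2, s1, s2, reps = 10, 40, 3.0, 1.0, 4000
cover_pooled = cover_welch = 0
for _ in range(reps):
    x = rng.normal(0, s1, n1)
    y = rng.normal(0, s2, n2)
    d = x.mean() - y.mean()                      # true difference is 0
    # Pooled-variance 95% CI (assumes a common variance).
    sp2 = ((n1 - 1) * x.var(ddof=1) + (n2 - 1) * y.var(ddof=1)) / (n1 + n2 - 2)
    half_p = stats.t.ppf(0.975, n1 + n2 - 2) * np.sqrt(sp2 * (1/n1 + 1/n2))
    # Welch 95% CI (separate variances, Satterthwaite df).
    v1, v2 = x.var(ddof=1) / n1, y.var(ddof=1) / n2
    df_w = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    half_w = stats.t.ppf(0.975, df_w) * np.sqrt(v1 + v2)
    cover_pooled += abs(d) <= half_p
    cover_welch += abs(d) <= half_w
print(f"coverage pooled: {cover_pooled/reps:.3f}, Welch: {cover_welch/reps:.3f}")
```

Fitting separate UN covariance matrices per group in the MMRM setting plays the same role as Welch's separate-variance interval here.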
938.
The two-parameter Gompertz distribution has recently been introduced as a lifetime model for reliability inference. In this paper, the Gompertz distribution is proposed for the baseline lifetimes of components in a composite system in which the failure of a component increases the load on the surviving components and thus increases the component hazard rate via a power-trend process. Point estimates of the composite system parameters are obtained by maximum likelihood. Interval estimates of the baseline survival function are obtained from the maximum-likelihood estimator via a bootstrap percentile method. Two parametric bootstrap procedures are proposed to test whether the hazard rate function changes with the number of failed components. Intensive simulations are carried out to evaluate the performance of the proposed estimation procedures.
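The baseline estimation step can be sketched with `scipy.stats.gompertz` on synthetic lifetimes (the power-trend load-sharing mechanism is omitted; parameter values are hypothetical): fit the two parameters by maximum likelihood, then form a parametric bootstrap percentile interval for the baseline survival function at a chosen time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Baseline component lifetimes from a Gompertz(shape c, scale b) distribution.
c_true, b_true = 0.5, 2.0
t = stats.gompertz.rvs(c_true, scale=b_true, size=100, random_state=rng)

# ML point estimates, with the location fixed at 0 as in a lifetime model.
c_hat, _, b_hat = stats.gompertz.fit(t, floc=0)

# Parametric bootstrap percentile CI for the baseline survival S(t0).
t0, B = 2.0, 200
boot = np.empty(B)
for i in range(B):
    tb = stats.gompertz.rvs(c_hat, scale=b_hat, size=t.size, random_state=rng)
    cb, _, bb = stats.gompertz.fit(tb, floc=0)
    boot[i] = stats.gompertz.sf(t0, cb, scale=bb)
lo, hi = np.percentile(boot, [2.5, 97.5])
s_hat = stats.gompertz.sf(t0, c_hat, scale=b_hat)
print(f"S({t0}) MLE = {s_hat:.3f}, 95% bootstrap percentile CI = ({lo:.3f}, {hi:.3f})")
```

The paper's bootstrap tests for a changing hazard rate would refit the model under the null and compare a test statistic against its resampled distribution in the same way.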
939.
This article considers the detection of changes in persistence in heavy-tailed series. We adopt a Dickey–Fuller-type ratio statistic and derive the null asymptotic distribution of the test statistic. The asymptotic distribution depends on the stable index κ, which is typically unknown and difficult to estimate; therefore, a block bootstrap method is proposed to detect changes without estimating κ. Empirical sizes and powers are investigated to show that the block bootstrap test is valid. Finally, the validity of the method is demonstrated by analyzing the RMB–US dollar exchange rate.
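The block bootstrap mechanics can be sketched on a simpler statistic than the paper's ratio test (the series below is a synthetic heavy-tailed AR(1), and the lag-1 autocorrelation stands in for the Dickey–Fuller-type statistic): resample fixed-length blocks of consecutive observations so that short-range dependence is preserved without estimating the stable index.

```python
import numpy as np

rng = np.random.default_rng(6)

def block_bootstrap(x, block_len, rng):
    """One moving-block bootstrap resample of the series x."""
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    return np.concatenate([x[s:s + block_len] for s in starts])[:n]

# Heavy-tailed dependent series: AR(1) with Student-t(3) innovations.
n = 500
x = np.empty(n)
x[0] = 0.0
eps = rng.standard_t(df=3, size=n)
for i in range(1, n):
    x[i] = 0.6 * x[i - 1] + eps[i]

def lag1_acf(z):
    """Lag-1 autocorrelation, the placeholder statistic here."""
    z = z - z.mean()
    return (z[:-1] * z[1:]).sum() / (z * z).sum()

boot = np.array([lag1_acf(block_bootstrap(x, 25, rng)) for _ in range(300)])
print(f"sample lag-1 ACF = {lag1_acf(x):.3f}, block-bootstrap SE = {boot.std():.3f}")
```

Critical values for the persistence-change test are obtained from the same resampling distribution, which is what frees the procedure from knowledge of κ.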
940.
Starting from a definition of the eco-efficiency of cultivated land use, this paper applies a DEA mixed efficiency-measure model and a coordinated-development-degree model to analyze the eco-efficiency of cultivated land use in the Heilongjiang reclamation area over 2001–2016, to identify the sources of inefficiency, and to examine the internal coordination of eco-efficiency. The results show that: over 2001–2016 the eco-efficiency of cultivated land use in the study area rose with fluctuations, though the change was modest, with clear differences among the administration bureaus; efficiency losses stem mainly from input inefficiency, social-output inefficiency, and environmental-output inefficiency, with input inefficiency generally exceeding the other two, so that input redundancy exerts the largest negative effect on cultivated land use efficiency; and the economic and social efficiency of cultivated land use lag behind its environmental efficiency, which is the main factor limiting the internal coordination of eco-efficiency. The paper argues that the eco-efficiency of cultivated land use is the joint expression of its economic, social, and environmental efficiency, and analyzes the sources and trends of inefficiency from an endogenous perspective, providing practical guidance for improving land use efficiency: optimize the input structure of cultivated land in the study area, reduce input redundancy, and raise economic and social efficiency so that economic, social, and environmental efficiency develop in a coordinated way.
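The DEA building block behind such efficiency measurement can be sketched as a simple input-oriented CCR model solved as a linear program (this is a basic textbook formulation, not the paper's mixed-measure model; the district data below are hypothetical): each unit's efficiency θ is the smallest factor by which its inputs could be scaled while a convex-cone combination of all units still dominates it.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 5 farming districts, inputs (land, labour) and
# outputs (grain, an environmental-quality score).
X = np.array([[100., 20.], [120., 30.], [ 90., 25.], [150., 40.], [110., 22.]])
Y = np.array([[500., 80.], [520., 70.], [480., 90.], [560., 60.], [540., 85.]])

def ccr_efficiency(j, X, Y):
    """Input-oriented CCR efficiency of unit j:
    min theta  s.t.  X^T lam <= theta * x_j,  Y^T lam >= y_j,  lam >= 0."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                      # minimize theta
    A_ub = np.block([[-X[j][:, None], X.T],          # X^T lam - theta x_j <= 0
                     [np.zeros((Y.shape[1], 1)), -Y.T]])  # -Y^T lam <= -y_j
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[j]]
    bounds = [(0, None)] * (n + 1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

eff = np.array([ccr_efficiency(j, X, Y) for j in range(len(X))])
print(np.round(eff, 3))
```

Units scoring θ = 1 lie on the efficient frontier; θ < 1 quantifies the input redundancy that the paper identifies as the dominant source of efficiency loss.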