71.
The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departures from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern-mixture models and shared-parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was −2.79 (p = .013). In placebo multiple imputation, the result was −2.17. Results from the other sensitivity analyses ranged from −2.21 to −3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework, in which a worst-reasonable-case result based on a controlled imputation approach with transparent and debatable assumptions is supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.
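For reference, the three MNAR model classes named above differ in how they factor the joint distribution of the outcome y and the missingness indicator r given covariates x (standard formulations, not specific to this trial):

    f(y, r \mid x) = f(y \mid x)\, f(r \mid y, x)                       (selection model)
    f(y, r \mid x) = f(y \mid r, x)\, f(r \mid x)                       (pattern-mixture model)
    f(y, r \mid x) = \int f(y \mid b, x)\, f(r \mid b, x)\, f(b)\, db   (shared-parameter model, with random effects b)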
72.
This paper considers five test statistics for comparing the recovery of a rapid growth-based enumeration test with that of the compendial microbiological method using a specific non-serial dilution experiment. The finite-sample distributions of these test statistics are unknown, because they are functions of correlated count data. A simulation study is conducted to investigate the type I and type II error rates. For a balanced experimental design, the likelihood ratio test and the main-effects analysis of variance (ANOVA) test for microbiological methods attained nominal type I error rates and provided the highest power compared with a test on weighted averages and two other ANOVA tests. The likelihood ratio test is preferred because it can also be used for unbalanced designs. It is demonstrated that an increase in power can only be achieved by increasing the spiked number of organisms used in the experiment; surprisingly, the power is not affected by the number of dilutions or the number of test samples. A real case study is provided to illustrate the theory. Copyright © 2013 John Wiley & Sons, Ltd.
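As a rough illustration of the kind of test involved, the sketch below simulates a balanced design and runs a likelihood ratio test for a method effect on Poisson counts; the design sizes, recovery rates and variable names are invented for the example, and the paper's handling of correlated counts is more elaborate.

    # Hypothetical balanced design: 2 methods x 3 dilutions x 10 test samples
    set.seed(1)
    d <- expand.grid(method   = c("rapid", "compendial"),
                     dilution = 0:2,
                     sample   = 1:10)
    # Assume ~50 CFU spiked, with the compendial method recovering ~90% of it
    mu <- ifelse(d$method == "rapid", 50, 45) / 2^d$dilution
    d$count <- rpois(nrow(d), mu)

    # Likelihood ratio test for a difference in recovery between the methods
    full    <- glm(count ~ method + factor(dilution), family = poisson, data = d)
    reduced <- glm(count ~ factor(dilution),          family = poisson, data = d)
    anova(reduced, full, test = "LRT")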
73.
Nonlinear mixed-effects (NLME) models are flexible enough to handle repeated-measures data from various disciplines. In this article, we propose both maximum-likelihood and restricted maximum-likelihood estimation of NLME models using first-order conditional expansion (FOCE) and the expectation-maximization (EM) algorithm. The FOCE-EM algorithm implemented in the ForStat procedure SNLME is compared with the Lindstrom and Bates (LB) algorithm, implemented in both the SAS macro NLINMIX and the S-Plus/R function nlme, in terms of computational efficiency and statistical properties. Two real-world data sets, an orange tree data set and a Chinese fir (Cunninghamia lanceolata) data set, and a simulated data set were used for evaluation. FOCE-EM converged for all mixed models derived from the base model in the two real-world cases, while LB did not, especially for models in which random effects are simultaneously considered in several parameters to account for between-subject variation. However, both algorithms produced identical parameter estimates and fit statistics for the models that converged. We therefore recommend using FOCE-EM in NLME models, particularly when convergence is a concern in model selection.
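The LB algorithm used as the benchmark is the one behind R's nlme; a minimal sketch on R's built-in Orange data (the classic orange tree example, though the authors' exact model specification may differ) looks like this:

    library(nlme)

    # Logistic growth curve with a tree-level random effect on the asymptote,
    # fitted by the Lindstrom-Bates algorithm
    fm <- nlme(circumference ~ Asym / (1 + exp(-(age - xmid) / scal)),
               data   = Orange,
               fixed  = Asym + xmid + scal ~ 1,
               random = Asym ~ 1 | Tree,
               start  = c(Asym = 200, xmid = 700, scal = 350),
               method = "ML")       # "REML" for restricted maximum likelihood
    summary(fm)

Convergence problems of the kind the authors report typically surface when random effects are placed on xmid and scal as well.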
74.
In this paper, we propose a methodology for analyzing longitudinal data through distances between pairs of observations (or individuals) with regard to the explanatory variables used to fit continuous response variables. Restricted maximum likelihood and generalized least squares are used to estimate the parameters of the model. We applied this new approach to study the effect of gender and exposure on a deviant-behavior (tolerance) variable for a group of youths followed over a period of 5 years. We performed simulations comparing our distance-based method with classical longitudinal analysis under both AR(1) and compound-symmetry correlation structures, evaluating the models by the Akaike and Bayesian information criteria and by the relative efficiency of the generalized variance of the errors of each model. We found small gains in fit for the proposed model over the classical methodology, particularly in small samples, regardless of variance, correlation, autocorrelation structure and the number of time measurements.
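For comparison, the classical fits referred to can be sketched with nlme's gls under the two correlation structures; the data below are simulated and all variable names are illustrative, not taken from the study.

    library(nlme)

    # Hypothetical long format: one tolerance score per youth per year, 5 years
    set.seed(2)
    df <- data.frame(id       = factor(rep(1:40, each = 5)),
                     time     = rep(1:5, times = 40),
                     gender   = rep(sample(c("F", "M"), 40, replace = TRUE), each = 5),
                     exposure = rep(runif(40), each = 5))
    df$tolerance <- 1 + 0.1 * df$time + 0.3 * df$exposure + rnorm(200, sd = 0.3)

    # REML fits under AR(1) and compound-symmetry within-subject correlation
    fit_ar1 <- gls(tolerance ~ time + gender + exposure, data = df,
                   correlation = corAR1(form = ~ time | id), method = "REML")
    fit_cs  <- gls(tolerance ~ time + gender + exposure, data = df,
                   correlation = corCompSymm(form = ~ 1 | id), method = "REML")
    AIC(fit_ar1, fit_cs)   # information criteria of the kind used for comparison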
75.
Knowledge of urban air quality is the first step in confronting air pollution issues. For the last few decades, many cities have been able to rely on a network of monitoring stations recording concentration values for the main pollutants. This paper focuses on functional principal component analysis (FPCA) to investigate multiple-pollutant datasets measured over time at multiple sites within a given urban area. Our purpose is to extend what has been proposed in the literature to data that are multisite and multivariate at the same time. The approach proves effective at highlighting relevant statistical features of the time series, making it possible to identify significant pollutants and to track the evolution of their variability over time. The paper also deals with the missing-value issue. As is well known, very long gap sequences often occur in air quality datasets, owing to long instrument failures that are not easily repaired or to data coming from a mobile monitoring station. In the dataset considered, large and continuous gaps are imputed by an empirical orthogonal function procedure, after denoising the raw data by functional data analysis and before performing FPCA, in order to further improve the reconstruction.
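As a reminder of what FPCA estimates (standard notation, not specific to this paper), each centered series is expanded in the eigenfunctions of its covariance operator:

    X_i(t) = \mu(t) + \sum_{k \ge 1} \xi_{ik}\, \phi_k(t),
    \qquad
    \xi_{ik} = \int \big( X_i(t) - \mu(t) \big)\, \phi_k(t)\, dt,

and truncating the sum at the first few k yields the low-dimensional summaries of pollutant variability used here.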
76.
A pivotal characteristic of credit defaults that is ignored by most credit scoring models is the rarity of the event. The most widely used model for estimating the probability of default is logistic regression. Since the dependent variable represents a rare event, the logistic regression model has relevant drawbacks, such as underestimation of the default probability, which can be very risky for banks. To overcome these drawbacks, we propose the generalized extreme value (GEV) regression model. In particular, in a generalized linear model (GLM) with a binary dependent variable, we suggest the quantile function of the GEV distribution as the link function, so that attention is focused on the tail of the response curve for values close to one. Estimation is by maximum likelihood. This model accommodates skewness and generalizes GLMs with the complementary log-log link function. We analyze its performance in simulation studies. Finally, we apply the proposed model to empirical data on Italian small and medium enterprises.
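In one common parametrization of this model (the authors' may differ in detail), the GEV-based response curve is

    \pi(x) = \Pr(Y = 1 \mid x) = \exp\{ -[\, 1 + \xi\, x^{\top}\beta \,]_{+}^{-1/\xi} \},

which weights the fit toward the tail where \pi is close to one; as \xi \to 0 the bracket tends to \exp(-x^{\top}\beta) and the curve reduces to the Gumbel (log-log family) response, the limiting case that ties the model to the complementary log-log GLM mentioned above.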
77.
The problem of multivariate regression modelling in the presence of heterogeneous data is addressed, with the aim of assessing how such heterogeneity influences the linear relations between responses and explanatory variables. In spite of its popularity, clusterwise regression is not designed to identify the linear relationships within ‘homogeneous’ clusters exhibiting internal cohesion and external separation. A within-clusterwise regression is introduced to achieve this aim and, since the possible presence of a linear relation ‘between’ clusters should also be taken into account, a general regression model is introduced to account for both the between-cluster and the within-cluster regression variation. Decompositions of the variance of the responses accounted for are also given, the least-squares estimates of the parameters are derived together with an appropriate coordinate-descent algorithm, and the performance of the proposed methodology is evaluated on several datasets.
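The decomposition involved follows the usual between/within split of the total deviation around the overall mean (written schematically here, for clusters g with sizes n_g and means \bar{y}_g; the paper refines this into regression-explained parts):

    \sum_{g} \sum_{i \in g} (y_{ig} - \bar{y})^2
      = \sum_{g} n_g\, (\bar{y}_g - \bar{y})^2
      + \sum_{g} \sum_{i \in g} (y_{ig} - \bar{y}_g)^2,

with the between-cluster and within-cluster regressions each targeting one of the two terms on the right.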
78.
We present a novel methodology for comprehensive statistical analysis of approximately periodic biosignal data. There are two main challenges in such analysis: (1) the automatic extraction (segmentation) of cycles from long, cyclostationary biosignals and (2) the subsequent statistical analysis, which in many cases involves separating temporal and amplitude variabilities. The proposed framework provides a principled approach to the statistical analysis of such signals, which in turn allows for an efficient cycle segmentation algorithm. This is achieved using a convenient representation of functions called the square-root velocity function (SRVF). The segmented cycles, represented by SRVFs, are temporally aligned using the notion of the Karcher mean, which in turn allows for more efficient statistical summaries of the signals. We show the strengths of this method through various disease classification experiments. In the case of myocardial infarction detection and localization, we show that our method compares favorably with methods described in the current literature.
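The SRVF referred to is the standard representation of elastic functional data analysis: a cycle f is mapped to

    q(t) = \dot{f}(t) \,/\, \sqrt{|\dot{f}(t)|},

under which time warping acts by isometries and the Fisher-Rao distance between functions reduces to the ordinary L2 distance between their SRVFs; this is what makes the Karcher-mean alignment of segmented cycles well posed.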
79.
This paper presents estimates of the parameters of the Block and Basu bivariate lifetime distribution in the presence of covariates and a cure fraction, applied to the analysis of survival data in which some individuals may never experience the event of interest and two lifetimes are associated with each unit. A Bayesian procedure is used to obtain point and interval estimates of the unknown parameters. Posterior summaries of interest are obtained using standard Markov chain Monte Carlo methods implemented in the rjags package for R. An illustration of the proposed methodology is given for a Diabetic Retinopathy Study data set.
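For readers unfamiliar with cure fractions, the standard mixture formulation (of which the bivariate Block and Basu model used here is an extension) writes the population survival function as

    S_{\text{pop}}(t) = \pi + (1 - \pi)\, S_0(t),

where \pi is the proportion of ‘cured’ units that never experience the event and S_0 is the survival function of the susceptible subpopulation; covariates typically enter through \pi and/or S_0.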
80.
We construct a mixture distribution comprising infant, exogenous and Gompertzian/non-Gompertzian senescent mortality. Using mortality data on Swedish females from 1751 onward, we show that this model outperforms models without these features, and we compare its trends in cohort and period mortality over time. We find an almost complete disappearance of exogenous mortality within the last century of period mortality, with cohort mortality approaching the same limits. Both Gompertzian and non-Gompertzian senescent mortality are consistently present, with the estimated balance between them oscillating constantly. While the parameters of the latter appear to be trending over time, the parameters of the former do not.
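One plausible formalization of such a three-part mixture is a Siler-type hazard (sketched here for orientation; the paper's non-Gompertzian senescent component adds a further term):

    h(x) = a_1 e^{-b_1 x} + c + a_2 e^{b_2 x},

where the first term captures infant mortality falling with age x, the constant c captures exogenous (background) mortality, and the final term is the exponentially increasing Gompertz component of senescent mortality.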