1.
Herein, we propose a data-driven test that assesses the lack of fit of nonlinear regression models. The comparison of local linear kernel and parametric fits is the basis of this test, and specific boundary-corrected kernels are not needed at the boundary when local linear fitting is used. Under the parametric null model, the asymptotically optimal bandwidth can be used for bandwidth selection. This selection method leads to the data-driven test that has a limiting normal distribution under the null hypothesis and is consistent against any fixed alternative. The finite-sample property of the proposed data-driven test is illustrated, and the power of the test is compared with that of some existing tests via simulation studies. We illustrate the practicality of the proposed test by using two data sets.
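The comparison the test is built on can be sketched numerically. The following is a minimal illustration, not the paper's actual statistic: it contrasts a parametric linear null fit with a local linear kernel smooth (Gaussian kernel, fixed bandwidth `h`, all names illustrative) through a mean squared discrepancy; the paper's test additionally selects the bandwidth from the data and calibrates the statistic against its normal limit.

```python
import numpy as np

def local_linear(x, y, x0, h):
    """Local linear kernel estimate of E[y|x] at x0 (Gaussian kernel)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]  # intercept = fitted value at x0

def lack_of_fit_stat(x, y, h):
    """Mean squared gap between a linear (null) fit and the kernel smooth."""
    b1, b0 = np.polyfit(x, y, 1)  # parametric null: straight line
    parametric = b0 + b1 * x
    smooth = np.array([local_linear(x, y, xi, h) for xi in x])
    return float(np.mean((smooth - parametric) ** 2))
```

Under a true linear model the statistic stays near zero; under a nonlinear alternative the smooth pulls away from the straight line and the statistic inflates, which is the behaviour the paper's consistency result formalizes.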
2.
Mark S. Pearce Heather O. Dickinson Murray Aitkin Louise Parker 《Journal of the Royal Statistical Society: Series A (Statistics in Society)》2002,165(3):523-548
Summary. This study investigates whether there was evidence of increasing risk of still-birth with increasing paternal exposure to ionizing radiation received during employment at the Sellafield nuclear installation before the child was conceived. A significant positive association is found between the total paternal preconceptional exposure to external ionizing radiation and the risk of still-birth (after adjustment for year of birth, social class, birth order and paternal age, odds ratio at 100 mSv 1.24 (95% confidence interval 1.04–1.45)). A summary of the principal scientific findings of this study has been published in The Lancet. This paper describes in detail the statistical methods that were used in the investigation and presents the results in full.
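The reported effect can be reconstructed arithmetically. This sketch works backwards from the figures quoted above (illustrative only): it converts the odds ratio at 100 mSv into a per-mSv log-odds slope, and backs an approximate Wald standard error out of the 95% confidence interval under the usual symmetric-on-the-log-scale assumption.

```python
import math

# Reported: odds ratio at 100 mSv = 1.24, 95% CI (1.04, 1.45)
beta_per_msv = math.log(1.24) / 100.0                        # log-odds slope per mSv
se_log_or = (math.log(1.45) - math.log(1.04)) / (2 * 1.96)   # approx. Wald SE of log OR

def odds_ratio(dose_msv):
    """Odds ratio of still-birth at a given dose under a log-linear dose model."""
    return math.exp(beta_per_msv * dose_msv)
```

This is only the arithmetic of a log-linear dose term; the paper's analysis adjusts for year of birth, social class, birth order and paternal age.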
3.
A two-month motor metamemory training programme was conducted to examine its effect on the metamemory level and movement-memory performance of students who were not physical-education majors. The results show that motor metamemory training significantly improved the students' motor memory strategies and memory-monitoring level. The results also show that training markedly improved their memory for rhythmic gymnastics and aerobics movements (measured by the number of movements recalled), with a larger training effect for rhythmic gymnastics than for aerobics.
4.
Craig H. Mallinckrodt Christopher J. Kaiser John G. Watkin Michael J. Detke Geert Molenberghs Raymond J. Carroll 《Pharmaceutical statistics》2004,3(3):171-186
The last observation carried forward (LOCF) approach is commonly utilized to handle missing values in the primary analysis of clinical trials. However, recent evidence suggests that likelihood‐based analyses developed under the missing at random (MAR) framework are sensible alternatives. The objective of this study was to assess the Type I error rates from a likelihood‐based MAR approach – mixed‐model repeated measures (MMRM) – compared with LOCF when estimating treatment contrasts for mean change from baseline to endpoint (Δ). Data emulating neuropsychiatric clinical trials were simulated in a 4 × 4 factorial arrangement of scenarios, using four patterns of mean changes over time and four strategies for deleting data to generate subject dropout via an MAR mechanism. In data with no dropout, estimates of Δ and SEΔ from MMRM and LOCF were identical. In data with dropout, the Type I error rates (averaged across all scenarios) for MMRM and LOCF were 5.49% and 16.76%, respectively. In 11 of the 16 scenarios, the Type I error rate from MMRM was at least 1.00% closer to the expected rate of 5.00% than the corresponding rate from LOCF. In no scenario did LOCF yield a Type I error rate that was at least 1.00% closer to the expected rate than the corresponding rate from MMRM. The average estimate of SEΔ from MMRM was greater in data with dropout than in complete data, whereas the average estimate of SEΔ from LOCF was smaller in data with dropout than in complete data, suggesting that standard errors from MMRM better reflected the uncertainty in the data. The results from this investigation support those from previous studies, which found that MMRM provided reasonable control of Type I error even in the presence of MNAR missingness. No universally best approach to analysis of longitudinal data exists. However, likelihood‐based MAR approaches have been shown to perform well in a variety of situations and are a sensible alternative to the LOCF approach. 
MNAR methods can be used within a sensitivity analysis framework to test the potential presence and impact of MNAR data, thereby assessing robustness of results from an MAR method. Copyright © 2004 John Wiley & Sons, Ltd.
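As a concrete illustration of the LOCF mechanism compared above (a toy sketch, not the paper's simulation set-up), the imputation amounts to a per-subject forward fill, after which the endpoint contrast is computed as if the carried-forward values had been observed:

```python
import pandas as pd

# Toy longitudinal data: NaN marks visits missed after dropout
df = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2],
    "visit":   [0, 1, 2, 0, 1, 2],
    "score":   [10.0, 12.0, None, 9.0, 8.0, None],
})

# LOCF: carry each subject's last observed value forward to the endpoint
df["score_locf"] = df.groupby("subject")["score"].ffill()

# Endpoint change from baseline under LOCF
endpoint = df[df["visit"] == 2].set_index("subject")["score_locf"]
baseline = df[df["visit"] == 0].set_index("subject")["score"]
change = endpoint - baseline
```

An MMRM analysis, by contrast, imputes nothing: it fits a likelihood-based repeated-measures model to the observed values only, which is what gives it the better Type I error control reported above.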
5.
For a wide variety of applications, experiments are based on units ordered over time or space. Models for these experiments generally may include one or more of: correlations, systematic trends, carryover effects and interference effects. Since the standard optimal block designs may not be efficient in these situations, orthogonal arrays of type I and type II, which were introduced in 1961 by C.R. Rao [Combinatorial arrangements analogous to orthogonal arrays, Sankhya A 23 (1961) 283–286], have been recently used to construct optimal and efficient designs for many of these experiments. Results in this area are unified and the salient features are outlined.
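The defining property of an ordinary strength-t orthogonal array — every set of t columns contains each combination of levels equally often — can be checked directly. A small sketch (strength 2 by default; the array in the test is the classic OA(4, 3, 2, 2); Rao's type I/II variants refine this basic definition):

```python
import itertools
import numpy as np

def is_orthogonal_array(rows, strength=2):
    """Check the equal-frequency property for every `strength`-subset of columns."""
    A = np.asarray(rows)
    n, k = A.shape
    for cols in itertools.combinations(range(k), strength):
        sub = A[:, cols]
        _, counts = np.unique(sub, axis=0, return_counts=True)
        n_combos = int(np.prod([len(np.unique(A[:, c])) for c in cols]))
        # every level combination must appear, all with the same frequency
        if len(counts) != n_combos or len(set(counts)) != 1:
            return False
    return True
```

In the design setting above, this balance is what makes the arrays useful: effects estimated from balanced column pairs are not confounded with one another.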
6.
Abstract. In this paper, we study the statistical interpretation of forensic DNA mixtures with related contributors in subdivided populations. Compact general formulae for match probabilities are obtained for two situations: a relative of one tested person is an unknown contributor of a DNA mixture; and two related unknowns are contributors. The effect of kinship and population structure is illustrated using a real case example.
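The population-structure correction underlying such match probabilities can be illustrated with the standard Balding–Nichols single-contributor formula, a far simpler building block than the mixture-with-relatives formulae the paper derives: for a homozygous genotype AA with allele frequency p and coancestry coefficient θ,

```python
def bn_match_homozygote(p, theta):
    """Balding-Nichols match probability for an AA genotype in a
    subdivided population with coancestry coefficient theta."""
    num = (2 * theta + (1 - theta) * p) * (3 * theta + (1 - theta) * p)
    return num / ((1 + theta) * (1 + 2 * theta))
```

At θ = 0 this reduces to the product-rule value p²; any θ > 0 inflates the match probability, which is why ignoring population substructure overstates the strength of the evidence.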
7.
Craig H. Mallinckrodt John G. Watkin Geert Molenberghs Raymond J. Carroll 《Pharmaceutical statistics》2004,3(3):161-169
Missing data, and the bias they can cause, are an almost ever‐present concern in clinical trials. The last observation carried forward (LOCF) approach has been frequently utilized to handle missing data in clinical trials, and is often specified in conjunction with analysis of variance (LOCF ANOVA) for the primary analysis. Considerable advances in statistical methodology, and in our ability to implement these methods, have been made in recent years. Likelihood‐based, mixed‐effects model approaches implemented under the missing at random (MAR) framework are now easy to implement, and are commonly used to analyse clinical trial data. Furthermore, such approaches are more robust to the biases from missing data, and provide better control of Type I and Type II errors than LOCF ANOVA. Empirical research and analytic proof have demonstrated that the behaviour of LOCF is uncertain, and in many situations it has not been conservative. Using LOCF as a composite measure of safety, tolerability and efficacy can lead to erroneous conclusions regarding the effectiveness of a drug. This approach also violates the fundamental basis of statistics as it involves testing an outcome that is not a physical parameter of the population, but rather a quantity that can be influenced by investigator behaviour, trial design, etc. Practice should shift away from using LOCF ANOVA as the primary analysis and focus on likelihood‐based, mixed‐effects model approaches developed under the MAR framework, with missing not at random methods used to assess robustness of the primary analysis. Copyright © 2004 John Wiley & Sons, Ltd.
8.
John D. Emerson David C. Hoaglin Frederick Mosteller 《Statistical Methods and Applications》1993,2(3):269-290
Summary. Meta-analyses of sets of clinical trials often combine risk differences from several 2×2 tables according to a random-effects model. The DerSimonian-Laird random-effects procedure, widely used for estimating the population mean risk difference, weights the risk difference from each primary study inversely proportional to an estimate of its variance (the sum of the between-study variance and the conditional within-study variance). Because those weights are not independent of the risk differences, however, the procedure sometimes exhibits bias and unnatural behavior. The present paper proposes a modified weighting scheme that uses the unconditional within-study variance to avoid this source of bias. The modified procedure has variance closer to that available from weighting by ideal weights when such weights are known. We studied the modified procedure in extensive simulation experiments using situations whose parameters resemble those of actual studies in medical research. For comparison we also included two unbiased procedures, the unweighted mean and a sample-size-weighted mean; their relative variability depends on the extent of heterogeneity among the primary studies. An example illustrates the application of the procedures to actual data and the differences among the results.
This research was supported by Grant HS 05936 from the Agency for Health Care Policy and Research to Harvard University.
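For reference, the conventional DerSimonian-Laird procedure that the paper modifies can be sketched as follows. This is the standard estimator with the usual (conditional) within-study variances — i.e. the version whose weight-effect dependence the paper criticises — not the proposed modification:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Standard DerSimonian-Laird random-effects pooled estimate.

    effects: per-study risk differences
    variances: their estimated within-study variances
    Returns (pooled estimate, between-study variance tau^2).
    """
    d = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                   # fixed-effect weights
    fixed = np.sum(w * d) / np.sum(w)
    Q = np.sum(w * (d - fixed) ** 2)              # Cochran's heterogeneity statistic
    k = len(d)
    denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / denom)        # moment estimate, truncated at 0
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    return float(np.sum(w_star * d) / np.sum(w_star)), float(tau2)
```

Because the weights `w_star` are built from variance estimates that depend on the same study results being averaged, the estimator is not a fixed linear combination of the effects — the source of the bias the modified scheme is designed to remove.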
9.
Impacts of complex emergencies or relief interventions have often been evaluated by comparing absolute mortality with international standardized mortality rates. A better evaluation would compare with the local baseline mortality of the affected populations. Projecting population-based survival data from before the emergency into the period of the emergency or intervention can create such a local baseline reference. We find that a log-transformed Gaussian time series model, in which the standard errors of the estimated rates are included in the variance, has the best forecasting capacity. If time-at-risk during the forecasted period is known, however, forecasting can instead be done with a Poisson time series model with overdispersion. In either case, the standard error of the estimated rates must enter the variance of the model, either additively in the Gaussian model or multiplicatively through overdispersion in the Poisson model. The data on which the forecast is based must be modelled carefully, with attention not only to calendar-time trends but also to periods with an excessive frequency of events (epidemics) and to seasonal variation, in order to eliminate residual autocorrelation and to provide a proper reference for comparison that reflects changes over time during the emergency. Hence, when modelled properly, it is possible to predict a reference for an emergency-affected population based on local conditions. We predicted childhood mortality during the war in Guinea-Bissau in 1998–1999, finding increased mortality in the first half-year of the war and mortality close to the expected level in the last half-year.
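The Poisson branch of the recipe above can be sketched as a log-linear trend fit followed by extrapolation. This is a deliberately minimal sketch with synthetic data: it omits the seasonal and epidemic terms, and the overdispersion adjustment, that the abstract stresses are essential in practice.

```python
import numpy as np

def fit_poisson_loglinear(X, y, offset, n_iter=50):
    """Poisson GLM with log link, fitted by iteratively reweighted least
    squares; offset = log(time at risk)."""
    # start from an ordinary least-squares fit on the log scale
    beta = np.linalg.lstsq(X, np.log(np.maximum(y, 0.5)) - offset, rcond=None)[0]
    for _ in range(n_iter):
        eta = X @ beta + offset
        mu = np.exp(eta)
        z = eta - offset + (y - mu) / mu           # working response
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    return beta

# Synthetic pre-emergency monthly death counts with a downward trend
t = np.arange(24.0)
X = np.column_stack([np.ones_like(t), t])
y = np.exp(3.0 - 0.02 * t)                         # expected monthly counts
beta = fit_poisson_loglinear(X, y, offset=np.zeros_like(t))

# Forecast the expected baseline counts for the next six months
t_new = np.arange(24.0, 30.0)
X_new = np.column_stack([np.ones_like(t_new), t_new])
baseline = np.exp(X_new @ beta)
```

The forecast `baseline` plays the role of the local reference: observed mortality during the emergency is then compared with it, with the forecast's own uncertainty (and overdispersion) widening the comparison interval.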
10.
Generalized additive models for location, scale and shape
R. A. Rigby D. M. Stasinopoulos 《Journal of the Royal Statistical Society. Series C, Applied statistics》2005,54(3):507-554
Summary. A general class of statistical models for a univariate response variable is presented which we call the generalized additive model for location, scale and shape (GAMLSS). The model assumes independent observations of the response variable y given the parameters, the explanatory variables and the values of the random effects. The distribution for the response variable in the GAMLSS can be selected from a very general family of distributions including highly skew or kurtotic continuous and discrete distributions. The systematic part of the model is expanded to allow modelling not only of the mean (or location) but also of the other parameters of the distribution of y, as parametric and/or additive nonparametric (smooth) functions of explanatory variables and/or random-effects terms. Maximum (penalized) likelihood estimation is used to fit the (non)parametric models. A Newton–Raphson or Fisher scoring algorithm is used to maximize the (penalized) likelihood. The additive terms in the model are fitted by using a backfitting algorithm. Censored data are easily incorporated into the framework. Five data sets from different fields of application are analysed to emphasize the generality of the GAMLSS class of models.
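The core GAMLSS idea — modelling distribution parameters beyond the mean as functions of covariates — can be shown in its simplest fully parametric form: a normal response whose mean and log standard deviation are both linear in one covariate, fitted by maximum likelihood. This is a toy sketch with synthetic data; the paper's framework adds smooth terms, random effects, penalized likelihood and a much wider family of distributions.

```python
import numpy as np
from scipy.optimize import minimize

def fit_location_scale(x, y):
    """ML fit of y ~ N(b0 + b1*x, sigma(x)^2) with log sigma(x) = g0 + g1*x."""
    def nll(theta):
        b0, b1, g0, g1 = theta
        mu = b0 + b1 * x
        log_s = g0 + g1 * x
        # negative log-likelihood (constants dropped)
        return np.sum(log_s + 0.5 * ((y - mu) / np.exp(log_s)) ** 2)
    return minimize(nll, np.zeros(4), method="BFGS").x

rng = np.random.default_rng(42)
x = rng.uniform(0, 1, 2000)
sigma = np.exp(-1.0 + 0.8 * x)          # scale grows with x (heteroscedasticity)
y = 1.0 + 2.0 * x + sigma * rng.standard_normal(2000)
b0, b1, g0, g1 = fit_location_scale(x, y)
```

A mean-only model would treat the changing spread as noise; jointly modelling location and scale recovers both regressions, which is the simplest instance of the location-scale-shape expansion described above.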