Paid full text: 422 articles; free: 33; free within China: 1.
By subject: Management 5, Demography 9, Collected works 7, Theory and methodology 5, Comprehensive 47, Sociology 14, Statistics 369.
By year: 2023 (3), 2022 (3), 2021 (6), 2020 (14), 2019 (27), 2018 (23), 2017 (33), 2016 (21), 2015 (23), 2014 (17), 2013 (86), 2012 (40), 2011 (20), 2010 (15), 2009 (14), 2008 (11), 2007 (9), 2006 (11), 2005 (10), 2004 (14), 2003 (12), 2002 (6), 2001 (11), 2000 (5), 1999 (6), 1998 (5), 1997 (4), 1996 (1), 1995 (2), 1994 (2), 1993 (1), 1987 (1).
A total of 456 results were found (search time: 295 ms).
1.
Abstract

The problem of testing the equality of two multivariate normal covariance matrices is considered. Assuming that the incomplete data are of monotone pattern, a quantity similar to the likelihood ratio test statistic is proposed. A satisfactory approximation to the distribution of this quantity is derived, and hypothesis testing based on the approximate distribution is outlined. The merits of the test are investigated using Monte Carlo simulation, which indicates that the test is very satisfactory even for moderately small samples. The proposed methods are illustrated using an example.
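As a point of reference only, the classical likelihood ratio statistic for testing equality of two covariance matrices on complete data can be written in a few lines of Python; the sketch below is illustrative and does not reproduce the paper's monotone-incomplete-data quantity or its distributional approximation.

import numpy as np
from scipy.stats import chi2

def lrt_equal_covariances(x1, x2):
    """-2 log likelihood ratio for H0: Sigma1 = Sigma2, complete data only."""
    n1, p = x1.shape
    n2, _ = x2.shape
    s1 = np.cov(x1, rowvar=False)                        # sample covariance, group 1
    s2 = np.cov(x2, rowvar=False)                        # sample covariance, group 2
    sp = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2) # pooled covariance under H0
    m = ((n1 + n2 - 2) * np.log(np.linalg.det(sp))
         - (n1 - 1) * np.log(np.linalg.det(s1))
         - (n2 - 1) * np.log(np.linalg.det(s2)))
    df = p * (p + 1) / 2                                 # two groups: p(p+1)/2 constraints
    # rough chi-square reference; Box's small-sample correction is omitted here
    return m, chi2.sf(m, df)

rng = np.random.default_rng(0)
x1 = rng.multivariate_normal(np.zeros(3), np.eye(3), size=50)
x2 = rng.multivariate_normal(np.zeros(3), np.eye(3), size=60)
print(lrt_equal_covariances(x1, x2))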
2.
The last observation carried forward (LOCF) approach is commonly utilized to handle missing values in the primary analysis of clinical trials. However, recent evidence suggests that likelihood‐based analyses developed under the missing at random (MAR) framework are sensible alternatives. The objective of this study was to assess the Type I error rates from a likelihood‐based MAR approach – mixed‐model repeated measures (MMRM) – compared with LOCF when estimating treatment contrasts for mean change from baseline to endpoint (Δ). Data emulating neuropsychiatric clinical trials were simulated in a 4 × 4 factorial arrangement of scenarios, using four patterns of mean changes over time and four strategies for deleting data to generate subject dropout via an MAR mechanism. In data with no dropout, estimates of Δ and SE(Δ) from MMRM and LOCF were identical. In data with dropout, the Type I error rates (averaged across all scenarios) for MMRM and LOCF were 5.49% and 16.76%, respectively. In 11 of the 16 scenarios, the Type I error rate from MMRM was at least 1.00% closer to the expected rate of 5.00% than the corresponding rate from LOCF. In no scenario did LOCF yield a Type I error rate that was at least 1.00% closer to the expected rate than the corresponding rate from MMRM. The average estimate of SE(Δ) from MMRM was greater in data with dropout than in complete data, whereas the average estimate of SE(Δ) from LOCF was smaller in data with dropout than in complete data, suggesting that standard errors from MMRM better reflected the uncertainty in the data. The results from this investigation support those from previous studies, which found that MMRM provided reasonable control of Type I error even in the presence of MNAR missingness. No universally best approach to analysis of longitudinal data exists. However, likelihood‐based MAR approaches have been shown to perform well in a variety of situations and are a sensible alternative to the LOCF approach. MNAR methods can be used within a sensitivity analysis framework to test the potential presence and impact of MNAR data, thereby assessing robustness of results from an MAR method. Copyright © 2004 John Wiley & Sons, Ltd.
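A rough Python sketch of how such a comparison can be set up (not the authors' simulation code; all effect sizes, the dropout rule and variable names are illustrative, and a random-intercept model via statsmodels is used as a simplified stand-in for a full MMRM with unstructured covariance):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_per_arm, n_visits = 100, 4

rows = []
for arm in (0, 1):
    for subj in range(n_per_arm):
        subj_effect = rng.normal(0, 1.0)               # between-subject variability
        for visit in range(1, n_visits + 1):
            change = subj_effect + rng.normal(0, 1.0)  # no true treatment effect (null)
            rows.append((f"{arm}-{subj}", arm, visit, change))
df = pd.DataFrame(rows, columns=["subject", "treat", "visit", "change"])

# MAR dropout: the chance of dropping out after a visit depends only on the
# observed change at that visit, not on unobserved future values.
def impose_mar_dropout(d):
    keep = []
    for _, g in d.groupby("subject", sort=False):
        g = g.sort_values("visit")
        last = n_visits
        for _, row in g.iterrows():
            if rng.random() < 1 / (1 + np.exp(1.5 - row["change"])):
                last = row["visit"]
                break
        keep.append(g[g["visit"] <= last])
    return pd.concat(keep)

obs = impose_mar_dropout(df)

# LOCF endpoint analysis: carry each subject's last observed change forward.
locf = obs.sort_values("visit").groupby("subject").last().reset_index()
locf_fit = smf.ols("change ~ treat", data=locf).fit()

# Likelihood-based analysis of the observed (incomplete) longitudinal data.
mmrm_fit = smf.mixedlm("change ~ treat + C(visit)", data=obs,
                       groups=obs["subject"]).fit()

print("LOCF treatment p-value:       ", locf_fit.pvalues["treat"])
print("Mixed-model treatment p-value:", mmrm_fit.pvalues["treat"])

Repeating the loop over many simulated trials and counting how often the p-value falls below 0.05 would give the empirical Type I error rates that the study compares.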
3.
Summary.  Non-ignorable missing data, a serious problem in both clinical trials and observational studies, can lead to biased inferences. Quality-of-life measures have become increasingly popular in clinical trials. However, these measures are often incompletely observed, and investigators may suspect that missing quality-of-life data are likely to be non-ignorable. Although several recent references have addressed missing covariates in survival analysis, they all required the assumption that missingness is at random or that all covariates are discrete. We present a method for estimating the parameters in the Cox proportional hazards model when missing covariates may be non-ignorable and continuous or discrete. Our method is useful in reducing the bias and improving efficiency in the presence of missing data. The methodology clearly specifies assumptions about the missing data mechanism and, through sensitivity analysis, helps investigators to understand the potential effect of missing data on study results.
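The authors' estimator is not reproduced here, but the sensitivity-analysis idea can be illustrated with a short Python sketch using lifelines: refit a Cox model over a range of assumed shifts ("deltas") for the missing covariate values and watch how the estimated log hazard ratio moves. The data, column names and the single-imputation rule below are all illustrative.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 300
qol = rng.normal(0, 1, n)                          # quality-of-life covariate
true_time = rng.exponential(scale=np.exp(-0.5 * qol))
cens_time = rng.exponential(scale=2.0, size=n)
time = np.minimum(true_time, cens_time)
event = (true_time <= cens_time).astype(int)
df = pd.DataFrame({"time": time, "event": event, "qol": qol})

# Delete roughly a third of the covariate values, more often when qol is low:
# a non-ignorable-looking mechanism, since the missing value itself drives
# missingness.
miss = rng.random(n) < 1 / (1 + np.exp(qol + 1))
df.loc[miss, "qol"] = np.nan

for delta in (-1.0, -0.5, 0.0, 0.5, 1.0):
    d = df.copy()
    # Crude single imputation at the observed mean plus an assumed shift delta;
    # the authors' likelihood-based method is far richer than this.
    d["qol"] = d["qol"].fillna(d["qol"].mean() + delta)
    cph = CoxPHFitter().fit(d, duration_col="time", event_col="event")
    print(f"delta={delta:+.1f}  log(HR) for qol = {cph.params_['qol']:.3f}")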
4.
Missing data, and the bias they can cause, are an almost ever‐present concern in clinical trials. The last observation carried forward (LOCF) approach has been frequently utilized to handle missing data in clinical trials, and is often specified in conjunction with analysis of variance (LOCF ANOVA) for the primary analysis. Considerable advances in statistical methodology, and in our ability to implement these methods, have been made in recent years. Likelihood‐based, mixed‐effects model approaches implemented under the missing at random (MAR) framework are now easy to implement, and are commonly used to analyse clinical trial data. Furthermore, such approaches are more robust to the biases from missing data, and provide better control of Type I and Type II errors than LOCF ANOVA. Empirical research and analytic proof have demonstrated that the behaviour of LOCF is uncertain, and in many situations it has not been conservative. Using LOCF as a composite measure of safety, tolerability and efficacy can lead to erroneous conclusions regarding the effectiveness of a drug. This approach also violates the fundamental basis of statistics as it involves testing an outcome that is not a physical parameter of the population, but rather a quantity that can be influenced by investigator behaviour, trial design, etc. Practice should shift away from using LOCF ANOVA as the primary analysis and focus on likelihood‐based, mixed‐effects model approaches developed under the MAR framework, with missing not at random methods used to assess robustness of the primary analysis. Copyright © 2004 John Wiley & Sons, Ltd.
5.
Oller, Gomez & Calle (2004) give a constant-sum condition for processes that generate interval‐censored lifetime data. They show that in models satisfying this condition, the lifetime distribution can be estimated non‐parametrically from a well‐known simplified likelihood. The author shows that this constant-sum condition is equivalent to the existence of an observation process that is independent of lifetimes and which gives the same probability distribution for the observed data as the underlying true process.
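The simplified likelihood in question is the one maximised by Turnbull-type self-consistency iterations. A hedged numpy sketch of such an iteration, using the finite interval endpoints as candidate mass points (a simplification of the usual equivalence-class construction), is:

import numpy as np

def turnbull_npmle(left, right, n_iter=500, tol=1e-8):
    """Self-consistency (EM) iteration for intervals (L, R]; R = inf means right-censored."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    support = np.unique(np.concatenate([left, right]))
    support = support[np.isfinite(support)]            # candidate mass points
    # alpha[i, j] = 1 if support point j lies inside observation i's interval
    alpha = (support[None, :] > left[:, None]) & (support[None, :] <= right[:, None])
    p = np.full(support.size, 1.0 / support.size)      # initial mass
    for _ in range(n_iter):
        denom = alpha @ p                               # P(interval_i) under current p
        # expected share of each observation's mass at each support point, averaged
        new_p = (alpha / denom[:, None] * p).mean(axis=0)
        if np.max(np.abs(new_p - p)) < tol:
            p = new_p
            break
        p = new_p
    return support, p

# toy data: interval-censored lifetimes plus two right-censored observations
left  = [1, 2, 2, 4, 5, 3]
right = [3, 4, 5, 7, np.inf, np.inf]
support, mass = turnbull_npmle(left, right)
print(dict(zip(support.round(2), mass.round(3))))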
6.
Summary. The paper deals with missing-data and forecasting problems in multivariate time series, making use of the Common Components Dynamic Linear Model (DLMCC) presented in Quintana (1985) and West and Harrison (1989). Some results are presented and discussed: by exploiting the correlation between series estimated by the DLMCC, the paper shows how the posterior distributions of the state vectors for the unobserved series can be updated. This is done on the basis of the updating of the state vectors of the observed series, for which the usual Kalman filter equations can be applied. An application to some Italian private consumption series illustrates the model's capabilities.
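A hedged numpy sketch of the mechanism the summary describes, using a bivariate local-level DLM rather than the full DLMCC: at time points where only one series is observed, the Kalman update is carried out with the observed component alone, and the cross-covariance of the state then updates the posterior for the unobserved series as well. All variances and the missingness pattern below are illustrative.

import numpy as np

rng = np.random.default_rng(3)
T = 100
W = np.array([[0.05, 0.04], [0.04, 0.05]])   # correlated state innovations
V = np.diag([0.5, 0.5])                      # observation noise covariance

# simulate two correlated local-level series, then hide part of series 2
state = np.zeros(2)
levels = np.empty((T, 2))
y = np.empty((T, 2))
for t in range(T):
    state = state + rng.multivariate_normal([0, 0], W)
    levels[t] = state
    y[t] = state + rng.multivariate_normal([0, 0], V)
y_obs = y.copy()
y_obs[60:, 1] = np.nan                       # series 2 unobserved after t = 60

# Kalman filter with partial updates when some components are missing
m, C = np.zeros(2), np.eye(2) * 10.0         # vague starting posterior
filtered = np.empty((T, 2))
for t in range(T):
    a, R = m, C + W                          # one-step-ahead prior
    obs = ~np.isnan(y_obs[t])
    if obs.any():
        F = np.eye(2)[obs]                   # picks out the observed components
        Q = F @ R @ F.T + V[np.ix_(obs, obs)]
        K = R @ F.T @ np.linalg.inv(Q)       # Kalman gain
        m = a + K @ (y_obs[t, obs] - F @ a)
        C = R - K @ F @ R
    else:
        m, C = a, R                          # nothing observed: pure forecast step
    filtered[t] = m

print("filtered level of the unobserved series at t=99:", filtered[-1, 1])
print("true level of series 2 at t=99:               ", levels[-1, 1])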
7.
The International Conference on Harmonisation guideline ‘Statistical Principles for Clinical Trials’ was adopted by the Committee for Proprietary Medicinal Products (CPMP) in March 1998, and consequently is operational in Europe. Since then more detailed guidance on selected topics has been issued by the CPMP in the form of ‘Points to Consider’ documents. The intent of these was to give guidance particularly to non‐statistical reviewers within regulatory authorities, although of course they also provide a good source of information for pharmaceutical industry statisticians. In addition, the Food and Drug Administration has recently issued a draft guideline on data monitoring committees. In November 2002 a one‐day discussion forum was held in London by Statisticians in the Pharmaceutical Industry (PSI). The aim of the meeting was to discuss how statisticians were responding to some of the issues covered in these new guidelines, and to document consensus views where they existed. The forum was attended by industry, academic and regulatory statisticians. This paper outlines the questions raised, resulting discussions and consensus views reached. It is clear from the guidelines and discussions at the workshop that the statistical analysis strategy must be planned during the design phase of a clinical trial and carefully documented. Once the study is complete the analysis strategy should be thoughtfully executed and the findings reported. Copyright © 2003 John Wiley & Sons, Ltd.
8.
An extended single‐index model is considered when responses are missing at random. A three‐step estimation procedure is developed to define an estimator for the single‐index parameter vector by a joint estimating equation. The proposed estimator is shown to be asymptotically normal. An algorithm for computing this estimator is proposed. This algorithm only involves one‐dimensional nonparametric smoothers, thereby avoiding the data sparsity problem caused by high model dimensionality. Some simulation studies are conducted to investigate the finite sample performances of the proposed estimators.
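For orientation, a minimal Python sketch of the basic single-index fit that such procedures build on, for fully observed responses (the paper's three-step treatment of responses missing at random is not reproduced): the index direction is estimated by minimising a leave-one-out kernel-smoothed residual sum of squares, so only one-dimensional smoothing is ever needed.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n, p = 300, 3
theta_true = np.array([1.0, 2.0, -1.0])
theta_true /= np.linalg.norm(theta_true)
X = rng.normal(size=(n, p))
y = np.sin(X @ theta_true) + 0.1 * rng.normal(size=n)

def loo_rss(theta, X, y, h=0.3):
    theta = theta / np.linalg.norm(theta)         # identifiability: unit norm
    u = X @ theta                                 # one-dimensional index
    K = np.exp(-0.5 * ((u[:, None] - u[None, :]) / h) ** 2)
    np.fill_diagonal(K, 0.0)                      # leave-one-out weights
    yhat = (K @ y) / K.sum(axis=1)                # Nadaraya-Watson smoother
    return np.mean((y - yhat) ** 2)

res = minimize(loo_rss, x0=np.ones(p), args=(X, y), method="Nelder-Mead")
theta_hat = res.x / np.linalg.norm(res.x)
if theta_hat[0] < 0:                              # resolve the sign ambiguity
    theta_hat = -theta_hat
print("estimated index direction:", theta_hat.round(3))
print("true index direction:     ", theta_true.round(3))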
9.
In confirmatory clinical trials, the prespecification of the primary analysis model is a universally accepted scientific principle to allow strict control of the type I error. Consequently, both the ICH E9 guideline and the European Medicines Agency (EMA) guideline on missing data in confirmatory clinical trials require that the primary analysis model is defined unambiguously. This requirement applies to mixed models for longitudinal data handling missing data implicitly. To evaluate compliance with the EMA guideline, we reviewed the model specifications in those clinical study protocols from development phases II and III submitted between 2015 and 2018 to the Ethics Committee at Hannover Medical School under the German Medicinal Products Act, which planned to use a mixed model for longitudinal data in the confirmatory testing strategy. Overall, 39 trials from different types of sponsors and a wide range of therapeutic areas were evaluated. While nearly all protocols specify the fixed and random effects of the analysis model (95%), only 77% give the structure of the covariance matrix used for modeling the repeated measurements. Moreover, the testing method (36%), the estimation method (28%), the computation method (3%), and the fallback strategy (18%) are given in fewer than half of the study protocols. Subgroup analyses indicate that these findings are universal and not specific to clinical trial phases or size of company. Altogether, our results show that guideline compliance is poor to varying degrees and, consequently, strict type I error rate control at the intended level is not guaranteed.
10.
In this paper the Bayesian analysis of incomplete categorical data under informative general censoring proposed by Paulino and Pereira (1995) is revisited. That analysis is based on Dirichlet priors and can be applied to any missing-data pattern. However, the known properties of the posterior distributions are scarce, and therefore severe limitations on the posterior computations remain. Here it is shown how a Monte Carlo simulation approach based on an alternative parameterisation can be used to overcome these computational difficulties. The proposed simulation approach makes the approximate estimation of general parametric functions available and can be implemented in a very straightforward way.
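To illustrate the kind of Monte Carlo simulation referred to (a generic data-augmentation scheme, not the specific alternative parameterisation proposed in the paper; the cells, counts and prior below are illustrative): partially classified counts are repeatedly split across their admissible cells and the cell probabilities are then redrawn from the resulting Dirichlet posterior.

import numpy as np

rng = np.random.default_rng(5)

cells = ["A", "B", "C"]
full_counts = np.array([30.0, 20.0, 10.0])        # completely classified data
# partially classified: 15 units known only to be in {A, B}, 5 only in {B, C}
partial = [({"A", "B"}, 15), ({"B", "C"}, 5)]
prior = np.ones(3)                                # Dirichlet(1, 1, 1) prior

n_draws = 5000
theta = np.full(3, 1 / 3)                         # starting cell probabilities
draws = np.empty((n_draws, 3))
for it in range(n_draws):
    # Augmentation step: split each partially classified count across its cells
    # in proportion to the current cell probabilities.
    completed = full_counts.copy()
    for cell_set, count in partial:
        idx = [i for i, c in enumerate(cells) if c in cell_set]
        w = theta[idx] / theta[idx].sum()
        completed[idx] += rng.multinomial(count, w)
    # Posterior step: Dirichlet draw given the completed table.
    theta = rng.dirichlet(prior + completed)
    draws[it] = theta

# discard an initial burn-in, then summarise any parametric function of theta
print("posterior means of cell probabilities:", draws[1000:].mean(axis=0).round(3))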