Similar Articles
20 similar articles found (search time: 15 ms)
1.
Historical control trials compare an experimental treatment with a previously conducted control treatment. By assigning all recruited subjects to the experimental arm, historical control trials can identify promising treatments in early-phase trials more effectively than randomized controlled trials. Existing designs of historical control trials with survival endpoints are based on the asymptotic normal distribution. However, it remains unclear whether the asymptotic distribution of the test statistic is close enough to the true distribution given the relatively small sample sizes of early-phase trials. In this article, we address this question by introducing an exact design approach for exponentially distributed survival endpoints and comparing it with an asymptotic design in both real and simulated examples. Simulation results show that the asymptotic test can lead to bias in the sample size estimation. We conclude that the proposed exact design should be used in the design of historical control trials.
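A minimal sketch of the exact-versus-asymptotic contrast described above, assuming exponentially distributed event times with complete follow-up and a known historical-control hazard; this is not the authors' design algorithm, only an illustration of how the exact chi-square distribution of the total time on test differs from its normal approximation.

```python
# Minimal sketch (not the authors' algorithm): exact vs. asymptotic power for a
# single-arm trial with exponential survival times, complete follow-up assumed.
# H0: hazard = lam0 (historical control); H1: hazard = lam1 < lam0 (improvement).
import numpy as np
from scipy import stats

def exact_power(n, lam0, lam1, alpha=0.05):
    # Under Exp(lam), 2 * lam * sum(T_i) follows a chi-square distribution with 2n df.
    crit = stats.chi2.ppf(1 - alpha, df=2 * n)        # reject H0 if 2*lam0*T > crit
    return stats.chi2.sf((lam1 / lam0) * crit, df=2 * n)

def asymptotic_power(n, lam0, lam1, alpha=0.05):
    # Normal approximation to T = sum(T_i): mean n/lam, sd sqrt(n)/lam.
    z = stats.norm.ppf(1 - alpha)
    t_crit = n / lam0 + z * np.sqrt(n) / lam0          # critical total time under H0
    return stats.norm.sf((t_crit - n / lam1) / (np.sqrt(n) / lam1))

for n in (10, 20, 40):
    print(n, round(exact_power(n, 1.0, 0.5), 3), round(asymptotic_power(n, 1.0, 0.5), 3))
```

For small n the two power values diverge noticeably, which is the kind of discrepancy that would distort sample size estimation under the asymptotic design.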

2.
Real-time polymerase chain reaction (PCR) is a reliable quantitative technique in gene expression studies. The statistical analysis of real-time PCR data is crucial for analyzing and interpreting the results. Statistical procedures for analyzing real-time PCR data determine the slope of the regression line and calculate the reaction efficiency. Mathematical functions are then applied to calculate expression of the target gene relative to the reference gene(s). Moreover, these statistical techniques compare Ct (threshold cycle) numbers between the control and treatment groups. There are many different procedures in SAS for evaluating real-time PCR data. In this study, the efficiency-calibrated model and the delta-delta Ct model were statistically tested and explained. Several methods were tested to compare control and treatment means of Ct, including the t-test (parametric), the Wilcoxon test (non-parametric) and multiple regression. The results showed that the applied methods led to similar conclusions, with no significant difference observed among the gene expression measurements obtained by the relative method.
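A minimal sketch of the calculations named in this abstract: the 2^(-ΔΔCt) relative-expression estimate and the parametric/non-parametric comparison of Ct values between groups. The numbers are illustrative, not data from the study, and 100% amplification efficiency is assumed.

```python
# Minimal sketch (illustrative Ct values): delta-delta Ct fold change plus t-test and
# Wilcoxon rank-sum comparisons of delta-Ct between control and treatment groups.
import numpy as np
from scipy import stats

ct_target_ctrl = np.array([24.1, 24.5, 23.9, 24.3])
ct_ref_ctrl    = np.array([18.0, 18.2, 17.9, 18.1])
ct_target_trt  = np.array([22.0, 22.4, 21.8, 22.1])
ct_ref_trt     = np.array([18.1, 18.0, 18.2, 17.9])

dct_ctrl = ct_target_ctrl - ct_ref_ctrl            # delta Ct per control sample
dct_trt  = ct_target_trt  - ct_ref_trt             # delta Ct per treated sample
ddct = dct_trt.mean() - dct_ctrl.mean()            # delta-delta Ct
fold_change = 2.0 ** (-ddct)                       # relative expression (assumes 100% efficiency)

t_stat, p_t = stats.ttest_ind(dct_trt, dct_ctrl)         # parametric comparison
u_stat, p_w = stats.mannwhitneyu(dct_trt, dct_ctrl)      # Wilcoxon rank-sum / Mann-Whitney
print(f"fold change = {fold_change:.2f}, t-test p = {p_t:.3f}, Wilcoxon p = {p_w:.3f}")
```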

3.
The intention of this article is to highlight sources of web‐based reference material, courses and software that will aid statisticians and researchers. The article includes websites that: assist in writing a protocol or proposal; link to online statistical textbooks; and provide statistical calculators or links to free statistical software and other guidance documents. Copyright © 2005 John Wiley & Sons, Ltd.

4.
Sensitivity analysis provides a way to mitigate traditional criticisms of Bayesian statistical decision theory concerning its dependence on subjective inputs. We suggest a general framework for sensitivity analysis that allows for perturbations in both the utility function and the prior distribution. Perturbations are constrained to classes modelling imprecision in judgements. The framework first discards definitely bad alternatives; then identifies alternatives that may share optimality with the current one; and finally detects the least changes in the inputs that lead to changes in the ranking. The associated computational problems and their implementation are discussed.
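A minimal sketch of the general idea of prior-perturbation sensitivity, using an epsilon-contamination class of priors and illustrative utilities; this is one common formalization, not the authors' framework, and the dominance check shown is only a sufficient condition for discarding an alternative.

```python
# Minimal sketch (illustrative numbers): expected-utility ranges over an
# epsilon-contamination class of priors, discarding any alternative whose
# maximum expected utility falls below another alternative's minimum.
import numpy as np

base_prior = np.array([0.5, 0.3, 0.2])             # elicited prior over three states
utility = np.array([[10.0, 2.0, -5.0],             # utility[a][s]: alternative a, state s
                    [ 6.0, 5.0,  1.0],
                    [ 3.0, 3.0,  3.0]])
eps = 0.1
contaminants = np.eye(base_prior.size)             # point masses span the extreme contaminations

ranges = []
for u in utility:
    vals = [(1 - eps) * base_prior @ u + eps * q @ u for q in contaminants]
    ranges.append((min(vals), max(vals)))

for a, (lo, hi) in enumerate(ranges):
    dominated = any(lo2 > hi for b, (lo2, hi2) in enumerate(ranges) if b != a)
    print(f"alternative {a}: EU in [{lo:.2f}, {hi:.2f}]{'  (dominated)' if dominated else ''}")
```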

5.
In this article we review the major areas of remote sensing in the Russian literature for the period 1976 to 1985 that use statistical methods to analyze the observed data. For each of these areas, the problems that have been studied and the statistical techniques that have been used are briefly described.

6.
This paper proposes a global strategy for statistical analysis of odour influence on the responsiveness of the mammalian olfactory bulb, the first relay of the olfactory pathway. Experiments were performed on 86 mitral cells recorded in 17 anaesthetized freely breathing rats. Five pure odours and their binary mixture were used. The spontaneous activity and odour-evoked responses of the cells were characterized by their temporal distribution of activity along the respiratory cycle, i.e. by cycle-triggered histograms. Several statistical analyses were performed to describe the influence of binary odour mixtures and, especially, to detect a possible dominance of one component of the mixture.
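A minimal sketch of how a cycle-triggered histogram, as mentioned above, can be built: each spike is assigned a phase within its respiratory cycle and phases are binned across cycles. The data below are simulated placeholders, not the recordings from the study.

```python
# Minimal sketch (simulated data): cycle-triggered histogram of spike counts by
# phase within the respiratory cycle, averaged over complete cycles.
import numpy as np

rng = np.random.default_rng(0)
cycle_onsets = np.arange(0.0, 30.0, 0.5)             # respiratory cycle starts (s), ~2 Hz
spike_times = np.sort(rng.uniform(0.0, 30.0, 600))   # recorded spike times (s)

n_bins = 20
hist = np.zeros(n_bins)
for t in spike_times:
    i = np.searchsorted(cycle_onsets, t) - 1          # index of the cycle containing the spike
    if i < 0 or i + 1 >= len(cycle_onsets):
        continue                                      # skip spikes outside complete cycles
    phase = (t - cycle_onsets[i]) / (cycle_onsets[i + 1] - cycle_onsets[i])
    hist[min(int(phase * n_bins), n_bins - 1)] += 1

hist /= len(cycle_onsets) - 1                         # mean spike count per cycle, per phase bin
print(np.round(hist, 2))
```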

7.
The Zernike polynomials arise in several applications such as optical metrology or image analysis on a circular domain. In the present paper, we determine optimal designs for regression models which are represented by expansions in terms of Zernike polynomials. We consider two estimation methods for the coefficients in these models and determine the corresponding optimal designs. The first is the classical least squares method, for which Φp-optimal designs in the sense of Kiefer [Kiefer, J., 1974, General equivalence theory for optimum designs (approximate theory). Annals of Statistics, 2, 849–879] are derived; these minimize an appropriate functional of the covariance matrix of the least squares estimator. It is demonstrated that optimal designs with respect to Kiefer's Φp-criteria (p > −∞) are essentially unique and concentrate observations on certain circles in the experimental domain. E-optimal designs have the same structure, but it is shown in several examples that these optimal designs are not necessarily uniquely determined. The second method is based on the direct estimation of the Fourier coefficients in the expansion of the expected response in terms of Zernike polynomials, and optimal designs minimizing the trace of the covariance matrix of the corresponding estimator are determined. The designs are also compared with the uniform design on a grid, which is commonly used in this context.
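A minimal sketch of the kind of comparison the abstract describes: the information matrix and a D-type criterion for a design concentrated on circles versus a uniform grid. The model uses only the first few unnormalized Zernike terms and arbitrary illustrative design points; it does not reproduce the paper's optimal designs or its Φp machinery.

```python
# Minimal sketch (illustrative designs): D-criterion for a regression on the first few
# unnormalized Zernike terms 1, rho*cos(th), rho*sin(th), 2*rho^2 - 1 on the unit disc.
import numpy as np

def regressors(rho, theta):
    return np.array([np.ones_like(rho), rho * np.cos(theta), rho * np.sin(theta), 2 * rho**2 - 1])

def info_matrix(rho, theta, w):
    F = regressors(rho, theta)               # 4 x n matrix of regression functions
    return (F * w) @ F.T                     # M(xi) = sum_i w_i f(x_i) f(x_i)^T

# Design 1: observations concentrated on two circles, equally spaced angles.
th = np.linspace(0, 2 * np.pi, 8, endpoint=False)
rho1 = np.concatenate([np.full(8, 1.0), np.full(8, 0.5)])
M1 = info_matrix(rho1, np.concatenate([th, th]), np.full(16, 1 / 16))

# Design 2: uniform grid over the unit disc.
g = np.linspace(-0.9, 0.9, 7)
X, Y = np.meshgrid(g, g)
inside = X**2 + Y**2 <= 1.0
rho2, th2 = np.hypot(X[inside], Y[inside]), np.arctan2(Y[inside], X[inside])
M2 = info_matrix(rho2, th2, np.full(rho2.size, 1 / rho2.size))

print("D-criterion (two circles):", np.linalg.det(M1) ** 0.25)
print("D-criterion (uniform grid):", np.linalg.det(M2) ** 0.25)
```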

8.
Random events such as a production machine breakdown in a manufacturing plant, an equipment failure within a transportation system, a security failure of an information system, or any number of other problems may cause supply chain disruption. Although several researchers have focused on supply chain disruptions and have discussed the measures that companies should use to design better supply chains, or studied the different ways that could help firms mitigate the consequences of a supply chain disruption, an appropriate method for predicting the time to disruptive events is still lacking. Based on this need, this paper introduces statistical flowgraph models (SFGMs) for survival analysis in supply chains. SFGMs provide an innovative approach to analyzing time-to-event data, which focuses on modeling waiting times until events of interest occur. SFGMs are useful for reducing multistate models into an equivalent binary-state model. Analysis with an SFGM yields the entire waiting-time distribution as well as the system reliability (survivor) and hazard functions for any total or partial waiting time. The end results from an SFGM help to identify a supply chain's strengths and, more importantly, its weaknesses. The results therefore provide valuable decision support for supply chain managers in predicting supply chain behavior. Examples presented in this paper clearly demonstrate the applicability of SFGMs to survival analysis in supply chains.
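A minimal numerical sketch of the flowgraph idea for a tiny two-path disruption model with exponential transition times (parameters hypothetical): the overall time-to-disruption density is a mixture of one branch density and the convolution of two series stages, from which the survivor and hazard functions follow. This is only an illustration, not the paper's SFGM methodology (which works with transmittances and moment generating functions).

```python
# Minimal sketch (hypothetical rates): two-path flowgraph for time to disruption.
# Path A: one exponential stage (prob p). Path B: two exponential stages in series (prob 1-p).
import numpy as np

dt = 0.01
t = np.arange(0, 50, dt)
p = 0.4
fA  = 0.8 * np.exp(-0.8 * t)                 # Exp(rate=0.8) branch density
fB1 = 0.5 * np.exp(-0.5 * t)                 # first stage of path B
fB2 = 0.3 * np.exp(-0.3 * t)                 # second stage of path B
fB  = np.convolve(fB1, fB2)[: t.size] * dt   # series transitions => convolution of densities

f = p * fA + (1 - p) * fB                    # overall time-to-disruption density
S = 1 - np.cumsum(f) * dt                    # survivor (reliability) function
h = f / np.maximum(S, 1e-12)                 # hazard function

print("P(disruption by t=5):", round(1 - S[int(5 / dt)], 3))
```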

9.
A fully parametric multistate model is explored for the analysis of animal carcinogenicity experiments in which the time of tumour onset is not known. This model does not require assumptions about tumour lethality or cause of death judgements and can be fitted in the absence of sacrifice data. The model is constructed as a three-state model with simple parametric forms for the transition rates. Maximum likelihood methods are used to estimate the transition rates and different treatment groups are compared using likelihood ratio tests. Selection of an appropriate model and methods to assess the fit of the model are illustrated with data from animal experiments. Comparisons with standard methods are made.

10.
This paper describes a statistical method for estimating data envelopment analysis (DEA) score confidence intervals for individual organizations or other entities. This method applies statistical panel data analysis, which provides proven and powerful methodologies for diagnostic testing and for estimation of confidence intervals. DEA scores are tested for violations of the standard statistical assumptions including contemporaneous correlation, serial correlation, heteroskedasticity and the absence of a normal distribution. Generalized least squares statistical models are used to adjust for violations that are present and to estimate valid confidence intervals within which the true efficiency of each individual decision-making unit occurs. This method is illustrated with two sets of panel data, one from large US urban transit systems and the other from a group of US hospital pharmacies.
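For readers unfamiliar with the scores being interval-estimated here, the sketch below shows the underlying DEA efficiency computation (input-oriented CCR model) as a linear program, with made-up inputs and outputs. The paper's actual contribution, panel-data GLS confidence intervals around such scores, is not reproduced.

```python
# Minimal sketch (illustrative data): input-oriented CCR DEA efficiency of each unit,
# solved as a linear program with scipy.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 4.0, 6.0],      # inputs: rows = input types, cols = units
              [3.0, 2.0, 5.0, 4.0]])
Y = np.array([[1.0, 2.0, 3.0, 4.0]])     # outputs: rows = output types, cols = units

def ccr_efficiency(j0):
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                       # variables: [theta, lambda_1..lambda_n]
    A_in = np.hstack([-X[:, [j0]], X])                # sum_j lambda_j x_ij - theta x_i,j0 <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])         # -sum_j lambda_j y_rj <= -y_r,j0
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, j0]],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun                                    # optimal theta = efficiency score in (0, 1]

print([round(ccr_efficiency(j), 3) for j in range(X.shape[1])])
```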

11.
A block cipher is one of the most common forms of algorithms used for data encryption. This paper describes an efficient set of statistical methods for analysing the security of these algorithms under the black-box approach. The procedures can be fully automated, which provides the designer or user of a block cipher with a useful set of tools for security analysis.
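A minimal sketch of one standard black-box randomness check that such an automated suite could include: the monobit (frequency) test on the bits of a cipher's output. The abstract does not specify the paper's tests, so this is only a representative example, with random bytes standing in for real ciphertext.

```python
# Minimal sketch (stand-in data): monobit frequency test on cipher output bits.
import os
from math import erfc, sqrt

ciphertext = os.urandom(4096)            # placeholder for real block-cipher output
bits = [(byte >> k) & 1 for byte in ciphertext for k in range(8)]
n = len(bits)
s = sum(2 * b - 1 for b in bits)         # +1 for each one bit, -1 for each zero bit
p_value = erfc(abs(s) / sqrt(2.0 * n))   # p-value under the hypothesis of random bits
print(f"n = {n}, p-value = {p_value:.4f}", "(pass)" if p_value > 0.01 else "(fail)")
```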

12.
The process of data analysis can be divided into stages. The opinions of 20 statisticians on the considerations relevant to the decisions made at these stages have been studied. Nineteen concepts were identified, which can be classified into five groups: the analysis question; characteristics of the data collection; characteristics of the data; conditions for and influences on data analysis; and conditions for and influences on statistical consultation. Economists mention considerations different from those mentioned by social scientists. Together, the considerations form an exhaustive list of concepts that are relevant to the design of computerized support in statistics.

13.
We investigate the problem of statistical analysis of interval-valued time series data – two nonintersecting real-valued functions, representing lower and upper limits, over a period of time. Specifically, we pay attention to the two concepts of phase (or horizontal) variability and amplitude (or vertical) variability, and propose a phase-amplitude separation method. We view interval-valued time series as elements of a function (Hilbert) space and impose a Riemannian structure on it. We separate phase and amplitude variability in observed interval functions using a metric-based alignment solution. The key idea is to map an interval to a point in R2, view interval-valued time series as parameterized curves in R2, and borrow ideas from elastic shape analysis of planar curves, including PCA, to perform registration, summarization, analysis, and modeling of multiple series. The proposed phase-amplitude separation provides a new way to perform PCA and modeling for interval-valued time series, and enables shape clustering of interval-valued time series. We apply this framework to three applications, in finance, meteorology and physiology, demonstrating the effectiveness of the proposed methods and uncovering some underlying patterns in the data. Experimental results on simulated data show that our method also applies to point-valued time series.

14.
Drug delivery devices are required to have excellent technical specifications to deliver drugs accurately, and, in addition, the devices should provide a satisfactory experience to patients, because this can have a direct effect on drug compliance. To compare patients' experience with two devices, cross-over studies with patient-reported outcomes (PRO) as response variables are often used. Because of the strength of cross-over designs, each subject can directly compare the two devices using the PRO variables, and variables indicating preference (preferring A, preferring B, or no preference) can easily be derived. Traditionally, methods based on frequentist statistics are used to analyze such preference data, but the frequentist methods have some limitations. Recently, Bayesian methods have come to be considered acceptable by the US Food and Drug Administration for designing and analyzing device studies. In this paper, we propose a Bayesian statistical method to analyze data from preference trials. We demonstrate that the new Bayesian estimator enjoys some optimality properties relative to the frequentist estimator.
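A minimal sketch of one natural Bayesian treatment of trinomial preference counts, a Dirichlet-multinomial model with a uniform prior; the abstract does not specify the authors' estimator, so this only illustrates the style of analysis, with hypothetical counts.

```python
# Minimal sketch (hypothetical counts): Dirichlet-multinomial posterior for preference
# probabilities (prefer A, prefer B, no preference) and the probability that A is preferred.
import numpy as np

counts = np.array([34, 21, 15])              # hypothetical counts: prefer A, prefer B, none
prior = np.array([1.0, 1.0, 1.0])            # Dirichlet(1, 1, 1) prior
rng = np.random.default_rng(1)

draws = rng.dirichlet(prior + counts, size=100_000)   # posterior draws of (pA, pB, pNone)
prob_A_preferred = (draws[:, 0] > draws[:, 1]).mean()
ci = np.percentile(draws[:, 0] - draws[:, 1], [2.5, 97.5])

print(f"P(pA > pB | data) = {prob_A_preferred:.3f}")
print(f"95% credible interval for pA - pB: [{ci[0]:.3f}, {ci[1]:.3f}]")
```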

15.
Urban ecosystems, considered centres of economic, social and cultural development, face a multitude of environmental and socio-economic challenges which impact on quality of life. Effective management of the urbanization process is believed to be critical to improving quality of life and realizing sustainable development. The ecosystem perspective provides the holistic approach needed to address the complexly interconnected issues which arise from urban development. Central to the mapping and characterization of urban ecosystems is the delineation of their boundaries, which are made less transparent by growing urbanization. This exposes the limitations of a dichotomous approach. An urban intensity index is a critical tool which supports urban ecosystem studies by facilitating analysis of effects along the urban–rural gradient. In this study, urban intensity is estimated and ranked from most to least intense for communities across Trinidad and Tobago, using multivariate statistical analysis of physical data from the built environment. This statistically validated index, designed for Trinidad and Tobago, should have wider applicability to other disciplines and countries.

16.
Data input errors can potentially affect statistical inferences, but little research has been published to date on this topic. In the present paper, we report the effect of data input errors on the statistical inferences drawn about population parameters in an empirical study involving 280 students from two Polish universities, namely the Warsaw University of Life Sciences – SGGW and the University of Information Technology and Management in Rzeszow. We found that 28% of the students committed at least one data error. While some of these errors were small and did not have any real effect, a few of them had substantial effects on the statistical inferences drawn about the population parameters.

17.
Data from a weather modification experiment are examined and a number of statistical analyses are reported. The validity of earlier inferences is studied, as are the utilities of various statistical methods. The experiment is described. The original analysis of North American Weather Consultants, who conducted the experiment, is reviewed. Data summarization is reported. A major approach to the analysis is the use of cloud-physics covariates in regression analyses. Finally, a multivariate analysis is discussed. It appears that the covariates may have been affected by treatment (cloud seeding) and that their use is invalid, since it not only reduces error variances but also removes the treatment effect. Some recommendations for improved design of similar future experiments are given in a concluding section, including preliminary trial use of blocking by storms.

18.
We present a novel methodology for a comprehensive statistical analysis of approximately periodic biosignal data. There are two main challenges in such analysis: (1) the automatic extraction (segmentation) of cycles from long, cyclostationary biosignals and (2) the subsequent statistical analysis, which in many cases involves the separation of temporal and amplitude variabilities. The proposed framework provides a principled approach for statistical analysis of such signals, which in turn allows for an efficient cycle segmentation algorithm. This is achieved using a convenient representation of functions called the square-root velocity function (SRVF). The segmented cycles, represented by SRVFs, are temporally aligned using the notion of the Karcher mean, which in turn allows for more efficient statistical summaries of signals. We show the strengths of this method through various disease classification experiments. In the case of myocardial infarction detection and localization, we show that our method compares favorably to methods described in the current literature.
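A minimal sketch of the SRVF representation for segmented cycles, using a simulated signal, a crude peak-based segmentation, and the standard SRVF formula q(t) = sign(f'(t))·sqrt(|f'(t)|); the elastic alignment and Karcher-mean steps described in the abstract are omitted, so this is not the authors' full pipeline.

```python
# Minimal sketch (simulated signal): peak-based cycle segmentation and the SRVF of each
# cycle after rescaling to the unit interval. Alignment / Karcher mean not implemented.
import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0, 10, 2000)
signal = np.sin(2 * np.pi * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

peaks, _ = find_peaks(signal, distance=100)          # crude cycle boundaries at successive peaks
cycles = [signal[a:b] for a, b in zip(peaks[:-1], peaks[1:])]

def srvf(f):
    x = np.linspace(0.0, 1.0, f.size)                # rescale the cycle to [0, 1]
    df = np.gradient(f, x)
    return np.sign(df) * np.sqrt(np.abs(df))         # square-root velocity function

qs = [srvf(c) for c in cycles]
print(f"{len(qs)} cycles segmented; first SRVF has {qs[0].size} samples")
```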

19.
The purpose of this study was to predict placement and nonplacement outcomes for mildly handicapped three- to five-year-old children given knowledge of developmental screening test data. Discrete discriminant analysis (Anderson, 1951; Cochran & Hopkins, 1961; Goldstein & Dillon, 1978) was used to classify children into either a placement or a nonplacement group using developmental information retrieved from longitudinal Child Find records (1982–89). These records were located at the Florida Diagnostic and Learning Resource System (FDLRS) in Sarasota, Florida, and provided usable data for 602 children. The developmental variables included performance on screening test activities from the Comprehensive Identification Process (Zehrbach, 1975), and consisted of: (a) gross motor skills, (b) expressive language skills, and (c) social-emotional skills. These three dichotomously scored developmental variables generated eight mutually exclusive and exhaustive combinations of screening data. Combined with one of three different types of cost-of-misclassification functions, each child in a random cross-validation sample of 100 was classified into one of the two outcome groups so as to minimize the expected cost of misclassification, based on the remaining 502 children. For each cost function designed by the researchers, a comparison was made between classifications from the discrete discriminant analysis procedure and actual placement outcomes for the 100 children. A logit analysis and a standard discriminant analysis were likewise conducted using the 502 children and compared with the results of the discrete discriminant analysis for selected cost functions.

20.
Most linear statistical methods deal with data lying in a Euclidean space. However, there are many examples, such as the topological structures of DNA molecules, in which the initial or the transformed data lie in a non-Euclidean space. To obtain a measure of variability in these situations, principal component analysis (PCA) is usually performed on a Euclidean tangent space, as it cannot be implemented directly on a non-Euclidean space. In contrast, principal geodesic analysis (PGA) is a newer tool that provides a measure of variability for nonlinear statistics. In this paper, the performance of this new tool is compared with that of PCA using a real data set representing a DNA molecular structure. It is shown that, due to the nonlinearity of the space, PGA explains more of the variability in the data than PCA.
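A minimal sketch of the tangent-space PCA approach that the abstract contrasts with PGA, applied to simulated directional data on the unit sphere rather than the paper's DNA data; exact PGA, which optimizes over geodesic submanifolds, is not implemented here.

```python
# Minimal sketch (simulated data on S^2): PCA in the tangent space at the mean direction,
# i.e. the Euclidean-tangent approach; true PGA is omitted.
import numpy as np

rng = np.random.default_rng(2)
pts = rng.normal([0, 0, 1], [0.3, 0.1, 0.05], size=(200, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)            # project samples onto the sphere

mu = pts.mean(axis=0)
mu /= np.linalg.norm(mu)                                     # extrinsic mean projected to the sphere

def log_map(p, x):
    # Riemannian log map on the sphere: tangent vector at p pointing toward x.
    d = np.arccos(np.clip(x @ p, -1.0, 1.0))
    v = x - (x @ p) * p
    norm = np.linalg.norm(v)
    return np.zeros_like(p) if norm < 1e-12 else d * v / norm

V = np.array([log_map(mu, x) for x in pts])                  # tangent vectors at the mean
_, svals, _ = np.linalg.svd(V - V.mean(axis=0), full_matrices=False)
explained = svals**2 / np.sum(svals**2)
print("variance explained by tangent-space principal components:", np.round(explained, 3))
```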

