Similar Documents
20 similar documents found (search time: 375 ms)
1.
Abstract.  In many spatial and spatial-temporal models, and more generally in models with complex dependencies, it may be too difficult to carry out full maximum-likelihood (ML) analysis. Remedies include the use of pseudo-likelihood (PL) and quasi-likelihood (QL) (also called the composite likelihood). The present paper studies the ML, PL and QL methods for general Markov chain models, partly motivated by the desire to understand the precise behaviour of the PL and QL methods in settings where this can be analysed. We present limiting normality results and compare performances in different settings. For Markov chain models, the PL and QL methods can be seen as maximum penalized likelihood methods. We find that QL is typically preferable to PL, and that it loses very little to ML, while sometimes gaining in model robustness. It also has appeal and potential as a modelling tool. Our methods are illustrated for consonant-vowel transitions in poetry and for analysis of DNA sequence evolution-type models.
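To make the ML baseline of the comparison concrete: for a first-order Markov chain observed as a single long sequence, the full ML estimate of the transition matrix is simply the row-normalised table of observed transition counts. A minimal Python sketch (the consonant/vowel coding below is a hypothetical illustration, not the paper's data; the PL and QL methods modify the objective and are not shown):

```python
from collections import Counter

def fit_markov_ml(sequence, states):
    """ML estimate of first-order transition probabilities from one chain.
    The MLE is the row-normalised matrix of observed transition counts."""
    counts = Counter(zip(sequence, sequence[1:]))
    probs = {}
    for s in states:
        row_total = sum(counts[(s, t)] for t in states)
        probs[s] = {t: counts[(s, t)] / row_total if row_total else 0.0
                    for t in states}
    return probs

# toy consonant/vowel sequence
seq = list("CVCVCCVCVCV")
P = fit_markov_ml(seq, ["C", "V"])
```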

2.
3.
The problem of interval estimation of the stress–strength reliability involving two independent Weibull distributions is considered. An interval estimation procedure based on the generalized variable (GV) approach is given when the shape parameters are unknown and arbitrary. The coverage probabilities of the GV approach are evaluated by Monte Carlo simulation. Simulation studies show that the proposed generalized variable approach is very satisfactory even for small samples. For the case of equal shape parameters, it is shown that the generalized confidence limits are exact. Some available asymptotic methods for the case of equal shape parameters are described and their coverage probabilities are evaluated using Monte Carlo simulation. Simulation studies indicate that no asymptotic approach based on the likelihood method is satisfactory even for large samples. Applicability of the GV approach for censored samples is also discussed. The results are illustrated using an example.
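As a sanity check on such procedures, the target quantity itself, R = P(X < Y) for independent Weibull stress X and strength Y, is easy to approximate by simulation, and for a common shape k it has the closed form R = c/(1 + c) with c = (scale_y/scale_x)^k. A stdlib-only Python sketch (function and argument names are ours, not the paper's):

```python
import random

def stress_strength_mc(shape_x, scale_x, shape_y, scale_y, n=200_000, seed=1):
    """Monte Carlo estimate of R = P(X < Y) for independent Weibull variates.
    Note: random.weibullvariate takes (scale, shape) in that order."""
    rng = random.Random(seed)
    hits = sum(
        rng.weibullvariate(scale_x, shape_x) < rng.weibullvariate(scale_y, shape_y)
        for _ in range(n)
    )
    return hits / n
```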

4.
A unified approach to the provision of exact expressions for inverse moments of positive quadratic forms in normal variables is described, using the method of (essentially) integrating the moment-generating function (Cressie, Davis, Folks & Policello, 1981). A number of special cases, many familiar from the literature, are reviewed; some allow closed-form representations, others are expressed in terms of well-known mathematical functions. In the general case, our approach affords representations suitable for numerical computations.
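The identity behind the approach can be made concrete in the central case: writing the quadratic form in principal-axis form Q = Σ_i λ_i Z_i² with Z_i iid N(0,1), one has E[Q⁻¹] = ∫₀^∞ E[e^{−tQ}] dt, and the Laplace transform has the closed form Π_i (1 + 2tλ_i)^{−1/2}. A stdlib-only numerical sketch (midpoint rule after mapping (0, ∞) to (0, 1)):

```python
import math

def inverse_moment(eigvals, n=20_000):
    """E[1/Q] for Q = sum_i lam_i * Z_i^2 (Z_i iid standard normal), by
    integrating the Laplace transform E[exp(-tQ)] = prod_i (1+2*t*lam_i)^(-1/2)
    over t in (0, inf), mapped to u in (0, 1) via t = u / (1 - u)."""
    def laplace(t):
        return math.prod((1 + 2 * t * lam) ** -0.5 for lam in eigvals)
    total, h = 0.0, 1.0 / n
    for i in range(n):
        u = (i + 0.5) * h                        # midpoint rule on (0, 1)
        t = u / (1 - u)
        total += laplace(t) / (1 - u) ** 2 * h   # Jacobian dt/du = (1-u)^-2
    return total
```

With all λ_i = 1 and at least three terms, Q is chi-squared on n degrees of freedom and E[1/Q] = 1/(n − 2), which the integral reproduces.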

5.
Modelling of the relationship between concentration (PK) and response (PD) plays an important role in drug development. The modelling becomes complicated when the drug concentration and response measurements are not taken simultaneously and/or hysteresis occurs between the response and the concentration. A model‐based approach fits a joint pharmacokinetic (PK) and concentration–response (PK/PD) model, including an effect compartment if necessary, to concentration and response data. However, this approach relies on the PK data being well described by a common PK model. We propose an algorithm for a semi‐parametric approach to fitting nonlinear mixed PK/PD models including an effect compartment using linear interpolation and extrapolation for concentration data. This approach is independent of the PK model, and the algorithm can easily be implemented using SAS PROC NLMIXED. Practical issues in programming and computing are also discussed. The properties of this approach are examined using simulations. This approach is used to analyse data from a study of the PK/PD relationship between insulin and glucose levels. Copyright © 2005 John Wiley & Sons, Ltd.
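The interpolation step of such an algorithm is easy to reproduce outside SAS. A Python sketch of linear interpolation with linear extrapolation beyond the first and last concentration sampling times (np.interp alone would instead hold the endpoint values flat; the function name is ours):

```python
import numpy as np

def interp_extrap(x, xp, fp):
    """Piecewise-linear interpolation of concentrations fp at times xp,
    evaluated at response times x, with linear extrapolation at both ends."""
    x, xp, fp = map(np.asarray, (x, xp, fp))
    y = np.interp(x, xp, fp)                 # flat beyond [xp[0], xp[-1]]
    lo, hi = x < xp[0], x > xp[-1]
    y[lo] = fp[0] + (x[lo] - xp[0]) * (fp[1] - fp[0]) / (xp[1] - xp[0])
    y[hi] = fp[-1] + (x[hi] - xp[-1]) * (fp[-1] - fp[-2]) / (xp[-1] - xp[-2])
    return y
```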

6.
An approach to non-linear principal components using radially symmetric kernel basis functions is described. The procedure consists of two steps. The first is a projection of the data set to a reduced dimension using a non-linear transformation whose parameters are determined by the solution of a generalized symmetric eigenvector equation. This is achieved by demanding a maximum variance transformation subject to a normalization condition (Hotelling's approach) and can be related to the homogeneity analysis approach of Gifi through the minimization of a loss function. The transformed variables are the principal components whose values define contours, or more generally hypersurfaces, in the data space. The second stage of the procedure defines the fitting surface, the principal surface, in the data space (again as a weighted sum of kernel basis functions) using the definition of self-consistency of Hastie and Stuetzle. The parameters of this principal surface are determined by a singular value decomposition, and cross-validation is used to obtain the kernel bandwidths. The approach is assessed on four data sets.

7.
This paper concerns the geometric treatment of graphical models using Bayes linear methods. We introduce Bayes linear separation as a second order generalised conditional independence relation, and Bayes linear graphical models are constructed using this property. A system of interpretive and diagnostic shadings is given, which summarises the analysis over the associated moral graph. Principles of local computation are outlined for the graphical models, and an algorithm for implementing such computation over the junction tree is described. The approach is illustrated with two examples. The first concerns sales forecasting using a multivariate dynamic linear model. The second concerns inference for the error variance matrices of the model for sales, and illustrates the generality of our geometric approach by treating the matrices directly as random objects. The examples are implemented using a freely available set of object-oriented programming tools for Bayes linear local computation and graphical diagnostic display.

8.
Three approaches to multivariate estimation for categorical data using randomized response (RR) are described. In the first approach, practical only for 2×2 contingency tables, a multi-proportions design is used. In the second approach, a separate RR trial is used for each variate and it is noted that the multivariate design matrix of conditional probabilities is given by the Kronecker product of the univariate design matrices of each trial, provided that the trials are independent of each other in a certain sense. The third approach requires only a single randomization and thus may be viewed as the use of vector response. Finally, a special-purpose bivariate design is presented.
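The second approach can be illustrated directly. Under Warner's design, each observed binary response relates to the true status through a 2×2 matrix of conditional probabilities, and with independent trials per variate the joint design matrix is the Kronecker product of the univariate ones. A Python sketch (the design probabilities and cell proportions below are invented for illustration):

```python
import numpy as np

def warner_design(p):
    """Warner randomised-response design matrix for one binary variate:
    column j gives P(observed response | true status j)."""
    return np.array([[p, 1 - p], [1 - p, p]])

# independent RR trials per variate: joint design = Kronecker product
P1, P2 = warner_design(0.7), warner_design(0.8)
P_joint = np.kron(P1, P2)

# moment estimator of the true 2x2 cell probabilities (illustrative values)
pi_true = np.array([0.4, 0.1, 0.2, 0.3])    # vec of the true 2x2 table
lam = P_joint @ pi_true                      # expected observed proportions
pi_hat = np.linalg.solve(P_joint, lam)       # recovers pi_true exactly here
```

With estimated (rather than expected) observed proportions, the same solve gives the moment estimator, and its sampling variability inflates with how close the design probabilities are to 1/2.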

9.
A versatile procedure is described comprising an application of statistical techniques to the analysis of the large, multi‐dimensional data arrays produced by electroencephalographic (EEG) measurements of human brain function. Previous analytical methods have been unable to identify objectively the precise times at which statistically significant experimental effects occur, owing to the large number of variables (electrodes) and small number of subjects, or have been restricted to two‐treatment experimental designs. Many time‐points are sampled in each experimental trial, making adjustment for multiple comparisons mandatory. Given the typically large number of comparisons and the clear dependence structure among time‐points, simple Bonferroni‐type adjustments are far too conservative. A three‐step approach is proposed: (i) summing univariate statistics across variables; (ii) using permutation tests for treatment effects at each time‐point; and (iii) adjusting for multiple comparisons using permutation distributions to control family‐wise error across the whole set of time‐points. Our approach provides an exact test of the individual hypotheses while asymptotically controlling family‐wise error in the strong sense, and can provide tests of interaction and main effects in factorial designs. An application to two experimental data sets from EEG studies is described, but the approach has application to the analysis of spatio‐temporal multivariate data gathered in many other contexts.
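The permutation-adjustment idea can be sketched with a plain mean-difference statistic in place of the full factorial analysis: compute the statistic at every time-point, recompute it under random relabellings of subjects, and use the permutation distribution of the maximum across time-points to obtain family-wise-error-adjusted p-values (the max-statistic device). A simplified two-group Python sketch, not the paper's exact procedure:

```python
import numpy as np

def maxstat_perm(groupA, groupB, n_perm=2000, seed=0):
    """FWE-adjusted permutation p-values for a two-group comparison at every
    time-point; groupA, groupB have shape (subjects, timepoints)."""
    rng = np.random.default_rng(seed)
    groupA, groupB = np.asarray(groupA), np.asarray(groupB)
    data = np.vstack([groupA, groupB])
    nA = groupA.shape[0]
    obs = np.abs(groupA.mean(0) - groupB.mean(0))
    maxima = np.empty(n_perm)
    for b in range(n_perm):
        idx = rng.permutation(data.shape[0])       # relabel subjects
        permA, permB = data[idx[:nA]], data[idx[nA:]]
        maxima[b] = np.abs(permA.mean(0) - permB.mean(0)).max()
    # adjusted p: how often the permutation maximum beats each observed stat
    return (maxima[:, None] >= obs[None, :]).mean(0)
```

Because the maximum is taken over the jointly permuted time-points, the adjustment automatically respects their dependence, which is what makes it far less conservative than Bonferroni.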

10.
Clustered binary data are common in medical research and can be fitted by the logistic regression model with random effects, which belongs to a wider class of models called generalized linear mixed models. Likelihood-based estimation of the model parameters often has to handle intractable integration, which has led to several estimation methods designed to overcome this difficulty. The penalized quasi-likelihood (PQL) method is very popular and computationally efficient in most cases. The expectation–maximization (EM) algorithm yields maximum-likelihood estimates, but requires computing a possibly intractable integral in the E-step. Variants of the EM algorithm for evaluating the E-step are introduced. The Monte Carlo EM (MCEM) method computes the E-step by approximating the expectation using Monte Carlo samples, while the modified EM (MEM) method computes the E-step by approximating the expectation using Laplace's method. All these methods involve several layers of approximation, so the corresponding estimates of the model parameters contain inevitable errors (large or small) induced by the approximations. Understanding and quantifying this discrepancy theoretically is difficult, owing to the complexity of the approximations in each method, even when the focus is restricted to clustered binary data. As an alternative competing computational method, we consider a non-parametric maximum-likelihood (NPML) method as well. We review and compare the PQL, MCEM, MEM and NPML methods for clustered binary data via a simulation study, which will be useful for researchers when choosing an estimation method for their analysis.

11.
Sample surveys for estimating the abundance of wildlife ungulate populations are considered in a design-based approach. On the basis of previous theoretical results, a two-stage sampling is proposed. In the first stage, some spatial units are selected using Lahiri-Midzuno sampling, while in the second stage, the animal abundance in the selected units is estimated by means of plot sampling performed on the faecal accumulation within the units. The statistical properties of the resulting ratio estimator of abundance are outlined. An application of the proposed method for estimating fallow-deer and roe-deer abundance in Maremma Regional Park is described.
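The ratio estimator at the heart of such a design can be written in two lines. Its ingredients are the plot-based abundance estimates y_i for the sampled units, an auxiliary size measure x_i for each unit, and the known total X of the size measure over all units. A generic Python sketch, omitting the paper's variance derivations:

```python
def ratio_estimate(y_sample, x_sample, x_total):
    """Ratio estimator of total abundance: the sample ratio of estimated
    counts to the auxiliary size measure, scaled up to the known total size."""
    return sum(y_sample) / sum(x_sample) * x_total
```

A classical property motivating the pairing: Lahiri–Midzuno selection, with first-stage inclusion driven by the auxiliary size, renders this ratio estimator design-unbiased.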

12.
The article describes an operational Bayesian approach to making inferences for the spectral density function for univariate autoregressive processes and for the AR operator of multivariate autoregressive processes. The derivation of the approach is described. Numerical examples, including the Wolfer Sunspot numbers, are used to demonstrate the practical usefulness of the approach.  相似文献   

13.
Summary.  The Sloan Digital Sky Survey is an extremely large astronomical survey that is conducted with the intention of mapping more than a quarter of the sky. Among the data that it is generating are spectroscopic and photometric measurements, both containing information about the red shift of galaxies. The former are precise and easy to interpret but expensive to gather; the latter are far cheaper but correspondingly more difficult to interpret. Recently, Csabai and co-workers have described various calibration techniques aiming to predict red shift from photometric measurements. We investigate what a structured Bayesian approach to the problem can add. In particular, we are interested in providing uncertainty bounds that are associated with the underlying red shifts and the classifications of the galaxies. We find that quite a generic statistical modelling approach, using for the most part standard model ingredients, can compete with much more specific custom-made and highly tuned techniques that are already available in the astronomical literature.

14.
Summary.  A modelling approach for three-dimensional trajectories with particular application to hand reaching motions is described. Bézier curves are defined by control points which have a convenient geometrical interpretation. A method for fitting the control points to trajectory data is described. These fitted control points are then linked to covariates of interest by using a regression model. This allows the prediction of new trajectories and the ability to model the variability in trajectories. The methodology is illustrated with an application to hand trajectory modelling for ergonomics. Motion capture was used to collect a total of about 2000 hand trajectories performed by 20 subjects to a variety of targets. A simple model with strong predictive performance and interpretability is developed. The use of hand trajectory models in digital human models for virtual manufacturing applications is discussed.
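The control-point fitting step reduces to linear least squares once the trajectory is given a parameterisation: with Bernstein basis functions evaluated at, say, normalised cumulative chord length, the control points solve B c = y. A Python sketch under that chord-length assumption (the paper's exact fitting method may differ):

```python
import numpy as np
from math import comb

def bernstein_matrix(t, degree):
    """Rows are the Bernstein basis values B_{j,degree}(t_i)."""
    t = np.asarray(t, dtype=float)[:, None]
    j = np.arange(degree + 1)[None, :]
    coef = np.array([comb(degree, k) for k in range(degree + 1)])
    return coef * t**j * (1 - t)**(degree - j)

def fit_bezier(points, degree=3):
    """Least-squares Bézier control points for a 2D/3D trajectory,
    parameterised by cumulative chord length rescaled to [0, 1]."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)]) / seg.sum()
    B = bernstein_matrix(t, degree)
    ctrl, *_ = np.linalg.lstsq(B, points, rcond=None)
    return ctrl
```

The fitted control points, a handful of numbers per trajectory, are then the responses in the covariate regression described above.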

15.
A partially linear model is a semiparametric regression model that consists of parametric and nonparametric regression components in an additive form. In this article, we propose a partially linear model using a Gaussian process regression approach and consider statistical inference of the proposed model. Based on the proposed model, the estimation procedure is described by posterior distributions of the unknown parameters and model comparisons between parametric representation and semi- and nonparametric representation are explored. Empirical analysis of the proposed model is performed with synthetic data and real data applications.

16.
By using prior knowledge it may be possible to deduce pieces of individual information from a frequency distribution of a population. If the prior information is described by a stochastic model, an information-theoretic approach can be applied in order to judge the possibilities for disclosure. By specifying the stochastic model in various ways it is shown how the decrease in entropy caused by the publication of a frequency distribution can be determined and interpreted. The stochastic models are also used to derive formulae for disclosure risks and expected numbers of disclosures.
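The entropy-decrease calculation can be made concrete in its simplest form: if an intruder's prior belief about one individual's category is π, and the published table reveals the empirical distribution n_i/N, the information gained is H(π) − H(n/N). A toy Python sketch of this illustrative special case, not the paper's general stochastic model:

```python
from math import log2

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(q * log2(q) for q in p if q > 0)

def entropy_decrease(prior, counts):
    """Reduction in uncertainty about one individual's category when the
    population frequency distribution (counts) is published."""
    n = sum(counts)
    posterior = [c / n for c in counts]
    return entropy(prior) - entropy(posterior)
```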

17.
A diverse range of non‐cardiovascular drugs are associated with QT interval prolongation, which may be associated with a potentially fatal ventricular arrhythmia known as torsade de pointes. QT interval has been assessed for two recent submissions at GlaxoSmithKline. Meta‐analyses of ECG data from several clinical pharmacology studies were conducted for the two submissions. A general fixed effects meta‐analysis approach using summaries of the individual studies was used to calculate a pooled estimate and 90% confidence interval for the difference between each active dose and placebo following both single and repeat dosing separately. The meta‐analysis approach described provided a pragmatic solution to pooling complex and varied studies, and is a good way of addressing regulatory questions on QTc prolongation. Copyright © 2002 John Wiley & Sons, Ltd.  相似文献
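The pooling step of a fixed-effects meta-analysis is simple to state: weight each study's active-minus-placebo difference by its inverse variance, and attach a 90% confidence interval using the pooled standard error. A minimal Python sketch (z = 1.645 for 90% two-sided coverage; function and argument names are ours):

```python
from math import sqrt

def fixed_effect_pool(estimates, std_errors, z=1.645):
    """Inverse-variance fixed-effects pooling of per-study differences,
    returning the pooled estimate and a 90% confidence interval."""
    w = [1 / se**2 for se in std_errors]
    pooled = sum(wi * est for wi, est in zip(w, estimates)) / sum(w)
    se = sqrt(1 / sum(w))
    return pooled, (pooled - z * se, pooled + z * se)
```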

18.
Analyses of carcinogenicity experiments involving occult (hidden) tumours are usually based on cause-of-death information or the results of many interim sacrifices. A simple compartmental model is described that does not involve the cause of death. The method of analysis requires only one interim sacrifice, in addition to the usual terminal kill, to ensure that the tumour incidence rates can be estimated. One advantage of the approach is demonstrated in the analysis of glomerulosclerosis following exposure to ionizing radiation. Although the semiparametric model involves fewer parameters, estimates of key functions derived in this analysis are similar to those obtained previously by using a nonparametric method that involves many more parameters.

19.
ABSTRACT

In clustered survival data, the dependence among individual survival times within a cluster has usually been described using copula models and frailty models. In this paper we propose a profile likelihood approach for semiparametric copula models with different cluster sizes. We also propose a likelihood ratio method based on the profile likelihood for testing the absence of the association parameter (i.e. a test of independence) under the copula models, which leads to a boundary problem of the parameter space. For this purpose, we show via simulation study that the proposed likelihood ratio method using an asymptotic chi-square mixture distribution performs well as the sample size increases. We compare the behaviours of the two models using the profile likelihood approach under a semiparametric setting. The proposed method is demonstrated using two well-known data sets.
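The mixture correction has a simple closed form in the one-parameter case: because the association parameter sits on the boundary of its space under the null, the LR statistic is asymptotically a 50:50 mixture of a point mass at zero and chi-squared on one degree of freedom, so for an observed statistic lr > 0 the p-value is half the usual chi-squared tail. A stdlib-only Python sketch:

```python
from math import erfc, sqrt

def boundary_lrt_pvalue(lr):
    """p-value for a likelihood ratio test of a parameter on the boundary,
    using the 0.5*chi2_0 + 0.5*chi2_1 asymptotic mixture (lr > 0).
    Uses P(chi2_1 >= x) = erfc(sqrt(x / 2))."""
    return 0.5 * erfc(sqrt(lr / 2))
```

Ignoring the boundary and referring lr to a plain chi-squared(1) reference would double the p-value, making the test of independence unnecessarily conservative.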

20.
The analysis of recurrent failure time data from longitudinal studies can be complicated by the presence of dependent censoring. A substantive literature has developed based on an artificial censoring device. We explore in this article the connection between this class of methods and truncated data structures. In addition, a new procedure is developed for estimation and inference in a joint model for recurrent events and dependent censoring. Estimation proceeds using a mixed U-statistic based estimating function approach. New resampling-based methods for variance estimation and model checking are also described. The methods are illustrated by application to data from an HIV clinical trial and by a limited simulation study.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号