Similar documents (20 results)
1.
One important property of any drug product is its stability over time. Drug stability studies are routinely carried out in the pharmaceutical industry in order to measure the degradation of an active pharmaceutical ingredient of a drug product. One important study objective is to estimate the shelf-life of the drug; the estimated shelf-life is required by the US Food and Drug Administration to be printed on the package label of the drug. This involves a suitable definition of the true shelf-life and the construction of an appropriate estimate of it. In this paper, the true shelf-life Tβ is defined as the time point at which 100β% of all the individual dosage units (e.g. tablets) of the drug have an active ingredient content no less than the lowest acceptable limit L, where β and L are prespecified constants. The value of Tβ depends on the parameters of the assumed degradation model of the active ingredient content and is therefore unknown. A lower confidence bound T̂β for Tβ is then provided and used as the estimated shelf-life of the drug.
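In symbols, a minimal sketch of this definition (the notation c(t) for the content of a random dosage unit at time t, and the confidence level 1 − α, are illustrative additions, not taken from the paper):

```latex
% True shelf-life: the last time at which at least 100*beta% of the
% dosage units still meet the lower content limit L (content decays in t)
T_\beta \;=\; \sup\bigl\{\, t \ge 0 \;:\; \Pr\bigl(c(t) \ge L\bigr) \ge \beta \,\bigr\},
\qquad
\Pr\bigl(\hat T_\beta \le T_\beta\bigr) \;\ge\; 1-\alpha .
```

The second condition is the defining property of the lower confidence bound that serves as the labelled shelf-life.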

2.
For studies with dichotomous outcomes, inverse sampling (also known as negative binomial sampling) is often used when the subjects arrive sequentially, when the underlying response of interest is acute, and/or when the maximum likelihood estimators of some epidemiologic indices are undefined. Although exact unconditional inference has been shown to be appealing, its applicability and popularity are severely hindered by the notorious conservativeness arising from the maximization principle and by the tedious computing time caused by the infinite summation involved. In this article, we demonstrate how these obstacles can be overcome by applying constrained maximum likelihood estimation and truncated approximation. The present work is motivated by confidence interval construction for the risk difference under inverse sampling. Wald-type and score-type confidence intervals based on inverting two one-sided tests and one two-sided test are considered. Monte Carlo simulations are conducted to evaluate the performance of these confidence intervals with respect to empirical coverage probability, empirical confidence width, and empirical left and right non-coverage probabilities. Two examples, from a maternal congenital heart disease study and a drug comparison study, are used to demonstrate the proposed methodologies.
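As a rough illustration of the Monte Carlo evaluation described, a sketch of estimating the coverage of a simple Wald-type interval under inverse sampling (the paper's constrained-MLE score intervals and truncated summation are not reproduced; all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def wald_ci_risk_diff(r, n1, n2):
    """Wald-type CI for p1 - p2 under inverse sampling: in each group,
    sampling stops after r successes, having used n trials in total.
    MLE of p is r/n; delta-method variance is p^2 (1-p) / r."""
    p1, p2 = r / n1, r / n2
    se = np.sqrt(p1**2 * (1 - p1) / r + p2**2 * (1 - p2) / r)
    z = 1.959963984540054  # 97.5% normal quantile
    d = p1 - p2
    return d - z * se, d + z * se

# Empirical coverage for a given (p1, p2, r)
p1, p2, r, reps = 0.3, 0.2, 30, 20_000
hits = 0
for _ in range(reps):
    # negative_binomial returns failures before the r-th success
    n1 = r + rng.negative_binomial(r, p1)
    n2 = r + rng.negative_binomial(r, p2)
    lo, hi = wald_ci_risk_diff(r, n1, n2)
    hits += lo <= p1 - p2 <= hi
print(f"empirical coverage = {hits / reps:.3f}")
```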

3.
This article discusses the consistent estimation of the parameters in a linear measurement error model when stochastic linear restrictions on the regression coefficients are available. We propose some methodologies to obtain consistent estimators when either the covariance matrix of the measurement errors or the reliability matrix of the independent variables is known. Their finite- and large-sample properties are derived without assuming normally distributed errors. A Monte Carlo simulation is carried out to study the finite-sample properties of the estimators.
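A standard consistency correction of this type, shown here as illustrative background rather than the article's exact estimator: with centered observed regressors X = Z + Δ and known measurement-error covariance Σ_δ,

```latex
% Naive OLS is inconsistent under measurement error; subtracting the known
% error covariance from the regressor moment matrix restores consistency:
\hat\beta \;=\; \Bigl(\tfrac{1}{n}X^\top X \;-\; \Sigma_\delta\Bigr)^{-1} \tfrac{1}{n}X^\top y .
```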

4.
This article proposes new methodologies for evaluating economic models’ out-of-sample forecasting performance that are robust to the choice of the estimation window size. The methodologies involve evaluating the predictive ability of forecasting models over a wide range of window sizes. The study shows that the tests proposed in the literature may lack the power to detect predictive ability and might be subject to data snooping across different window sizes if used repeatedly. An empirical application shows the usefulness of the methodologies for evaluating exchange rate models’ forecasting ability.
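A bare-bones sketch of evaluating out-of-sample accuracy over a grid of rolling-window sizes (a toy rolling-mean forecast on synthetic data; the article's robust test statistics are not implemented):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
y = np.cumsum(rng.normal(size=T)) * 0.05 + rng.normal(size=T)  # toy series

def oos_mse(y, window):
    """One-step-ahead forecasts from a rolling-mean model of given window."""
    errs = [y[t] - y[t - window:t].mean() for t in range(window, len(y))]
    return np.mean(np.square(errs))

# Evaluate predictive ability over a wide range of window sizes,
# rather than committing to a single, arbitrary estimation window.
for w in (25, 50, 100, 200):
    print(f"window={w:>3}  out-of-sample MSE={oos_mse(y, w):.4f}")
```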

5.
In this article we study two methodologies that identify and specify canonical-form VARMA models. The two methodologies are: (1) an extension of the scalar component methodology, which specifies canonical VARMA models by identifying scalar components through canonical correlation analysis; and (2) the Echelon form methodology, which specifies canonical VARMA models through the estimation of Kronecker indices. We compare the two forms and methodologies on three levels. First, we present a theoretical comparison. Second, we present a Monte Carlo simulation study that compares the performances of the two methodologies in identifying some pre-specified data generating processes. Lastly, we compare the out-of-sample forecast performance of the two forms when models are fitted to real macroeconomic data.

6.
Lu Lin & Yongxin Liu, Statistics, 2017, 51(4): 745-765
We consider a partially piecewise regression in which the main regression coefficients are constant over all subdomains, but the extraessential regression function varies across pieces and is difficult to estimate. For this situation, two new regression methodologies are proposed under the criteria of mini-max-risk and mini-mean-risk. The resulting models can describe the regression relations in maximum-risk and mean-risk environments, respectively. A two-stage estimation procedure, together with a composite method, is introduced. The asymptotic normality of the estimators is established, and the standard convergence rate and efficiency are achieved. Some unusual features of the new estimators and predictions, and the related variable selection, are discussed for a comprehensive comparison. Simulation studies and a real financial example are given to illustrate the new methodologies.

7.
The estimation of the kurtosis parameter of the underlying distribution plays a central role in many statistical applications. The central theme of the article is to improve the estimation of the kurtosis parameter using a priori information. More specifically, we consider the problem of estimating the kurtosis parameter of a multivariate population when some prior information regarding the parameter is available. The rationale is that the sample estimator of the kurtosis parameter has a large estimation error. In this situation we consider shrinkage and pretest estimation methodologies and reappraise their statistical properties. Estimation based on these strategies yields relatively smaller estimation error than the sample estimator in the candidate subspace. A large-sample theory of the suggested estimators is developed and compared. The results demonstrate that the suggested estimators outperform the estimator based on the sample data alone in the candidate subspace. To appreciate the relative behavior of the estimators in a finite-sample scenario, a Monte Carlo simulation study is designed and performed. The results of the simulation study strongly corroborate the asymptotic results. To illustrate the application of the estimators, some examples are showcased based on recently published data.
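Generic templates of the two strategies mentioned, given for orientation only (the article's exact estimators and test statistic may differ): with sample estimator κ̂, prior guess κ₀, and a test statistic Dₙ for H₀: κ = κ₀ with critical value c,

```latex
\hat\kappa^{\mathrm{PT}} \;=\; \hat\kappa \;-\; (\hat\kappa - \kappa_0)\, I(D_n \le c)
\qquad \text{(pretest)},
\\
\hat\kappa^{\mathrm{S}} \;=\; \kappa_0 \;+\; \Bigl(1 - \tfrac{c}{D_n}\Bigr)(\hat\kappa - \kappa_0)
\qquad \text{(shrinkage)} .
```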

8.
In survival or reliability data analysis, it is often useful to estimate the quantiles of the lifetime distribution, such as the median time to failure. Various nonparametric methods can construct confidence intervals for the quantiles of lifetime distributions, some of which are implemented in commonly used statistical software packages. Here we investigate the performance of different interval estimation procedures under a variety of settings with different censoring schemes. Our main objectives in this paper are to (i) evaluate the performance of confidence intervals based on the transformation approach commonly used in statistical software, (ii) introduce a new density-estimation-based approach to obtain confidence intervals for survival quantiles, and (iii) compare it with the transformation approach. We provide a comprehensive comparative study and offer some useful practical recommendations based on our results. Some numerical examples are presented to illustrate the methodologies developed.
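A minimal sketch of the density-estimation idea for a quantile confidence interval, in the uncensored case only (bandwidth choice and the censoring adjustments the paper addresses are omitted; the function name is illustrative):

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

def quantile_ci_density(x, p=0.5, level=0.95):
    """CI for the p-th quantile using the asymptotic variance
    p(1-p) / (n f(Q_p)^2), with f estimated by a Gaussian KDE."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    q = np.quantile(x, p)
    f_hat = gaussian_kde(x)(q)[0]          # density estimate at the quantile
    se = np.sqrt(p * (1 - p) / n) / f_hat
    z = norm.ppf(0.5 + level / 2)
    return q - z * se, q + z * se

rng = np.random.default_rng(2)
lifetimes = rng.exponential(scale=10.0, size=200)
print(quantile_ci_density(lifetimes, p=0.5))  # CI for the median lifetime
```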

9.
The problem of consistent estimation of regression coefficients in a multivariate linear ultrastructural measurement error model is considered in this article when some additional information on the regression coefficients is available a priori. Such additional information is expressible in the form of stochastic linear restrictions. Utilizing the stochastic restrictions given a priori, some methodologies are presented to obtain consistent estimators of the regression coefficients under two types of additional information separately, viz., the covariance matrix of the measurement errors and the reliability matrix associated with the explanatory variables. The measurement errors are not assumed to be necessarily normally distributed. The asymptotic properties of the proposed estimators are derived and analyzed both analytically and numerically through a Monte Carlo simulation experiment.
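When the reliability matrix is the known quantity, the classical correction takes the form below (again standard measurement-error background, not the article's exact expressions; Σ_zz denotes the covariance of the true regressors and Σ_δ that of the measurement errors):

```latex
% plim of naive OLS: attenuation through the reliability matrix
\operatorname{plim}\,\hat\beta_{\mathrm{OLS}} \;=\; K_{xx}\,\beta,
\qquad
K_{xx} \;=\; (\Sigma_{zz}+\Sigma_{\delta})^{-1}\Sigma_{zz},
\\
\hat\beta \;=\; \hat K_{xx}^{-1}\,\hat\beta_{\mathrm{OLS}} .
```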

10.
In this paper, we propose a random varying-coefficient model for longitudinal data. This model differs from the standard varying-coefficient model in that the time-varying coefficients are assumed to be subject-specific and can be considered realizations of stochastic processes. This modelling strategy allows us to employ powerful mixed-effects modelling techniques to efficiently incorporate the within-subject and between-subject variations in the estimators of the time-varying coefficients. Thus, the subject-specific feature of longitudinal data is effectively accounted for in the proposed model. A backfitting algorithm is proposed to estimate the coefficient functions. Simulation studies show that the proposed estimation methods have better finite-sample efficiency than the standard local least squares method. An application to an AIDS clinical study is presented to illustrate the proposed methodologies.

11.
In practical estimation problems, asymmetric loss functions are often preferred over squared error loss, as they are more appropriate in many applications. We consider here the problem of fixed-precision point estimation of a linear parametric function of β for the multiple linear regression model under asymmetric loss functions. Owing to the presence of nuisance parameters, the sample size for the estimation problem is not known beforehand, and hence we resort to adaptive multistage sampling methodologies. We discuss several multistage sampling techniques and compare their performance using simulation runs. The implementation of the code for our proposed models is accomplished using a MATLAB 7.0.1 program run on a Pentium IV machine. Finally, we highlight the significance of such asymmetric loss functions with a few practical examples.

12.
Longitudinal data analysis in epidemiological settings is complicated by large multiplicities of short time series and the occurrence of missing observations. To handle such difficulties, Rosner & Muñoz (1988) developed a weighted non-linear least squares algorithm for estimating parameters of first-order autoregressive (AR1) processes with time-varying covariates. This method proved efficient when compared with complete-case procedures. Here that work is extended by (1) introducing a different estimation procedure based on the EM algorithm, and (2) formulating estimation techniques for second-order autoregressive models. The second development is important because some of the intended areas of application (adult pulmonary function decline, childhood blood pressure) have autocorrelation functions which decay more slowly than the geometric rate imposed by an AR1 model. Simulation studies are used to compare the three methodologies (non-linear, EM-based and complete-case) with respect to bias, efficiency and coverage, both in the presence and in the absence of time-varying covariates. Differing degrees and mechanisms of missingness are examined. Preliminary results indicate the non-linear approach to be the method of choice: it has high efficiency and is easily implemented. An illustrative example concerning pulmonary function decline in the Netherlands is analyzed using this method.
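A compact illustration of regression with AR(1) errors fitted by iterated (Cochrane-Orcutt-style) least squares on a single complete series; the article's weighted NLS and EM procedures for many short, incomplete series are considerably more involved:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate y_t = x_t' beta + e_t with AR(1) errors e_t = rho e_{t-1} + u_t
T, beta_true, rho_true = 400, np.array([1.0, -0.5]), 0.6
X = np.column_stack([np.ones(T), rng.normal(size=T)])
e = np.zeros(T)
for t in range(1, T):
    e[t] = rho_true * e[t - 1] + rng.normal()
y = X @ beta_true + e

beta = np.linalg.lstsq(X, y, rcond=None)[0]            # start from OLS
for _ in range(20):                                    # iterate to convergence
    r = y - X @ beta
    rho = (r[:-1] @ r[1:]) / (r[:-1] @ r[:-1])         # AR(1) coefficient
    ys, Xs = y[1:] - rho * y[:-1], X[1:] - rho * X[:-1]  # quasi-difference
    beta = np.linalg.lstsq(Xs, ys, rcond=None)[0]
print("rho:", round(rho, 3), "beta:", beta.round(3))
```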

13.
Evidence-based quantitative methodologies have been proposed to inform decision-making in drug development, such as metrics for go/no-go decisions or predictions of success, where success is identified with statistical significance of future clinical trials. While these methodologies appropriately address some critical questions about the potential of a drug, they either consider past evidence without predicting the outcome of future trials, or focus only on efficacy, failing to account for the multifaceted aspects of successful drug development. As quantitative benefit-risk assessments could enhance decision-making, we propose a more comprehensive approach using a composite definition of success based not only on the statistical significance of the treatment effect on the primary endpoint but also on its clinical relevance and on a favorable benefit-risk balance in the next pivotal studies. For one drug, we can thus study several development strategies before starting the pivotal trials by comparing their predictive probabilities of success. The predictions are based on the available evidence from previous trials, to which new hypotheses on the future development can be added. The resulting predictive probability of composite success provides a useful summary to support the discussions of the decision-makers. We present a fictitious, but realistic, example in major depressive disorder inspired by a real decision-making case.
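A toy sketch of computing a predictive probability of composite success by simulation. The success criteria, the posterior for the effect, the trial model and all numbers below are illustrative placeholders, not the paper's actual benefit-risk framework:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

# Posterior for the treatment effect from previous trials (illustrative):
mu_post, sd_post = 0.35, 0.15        # effect-size scale
n_per_arm, mcid = 150, 0.20          # future trial size; clinical relevance bar
p_tox_ctrl, p_tox_trt = 0.10, 0.12   # assumed toxicity rates (benefit-risk part)

sims = 100_000
theta = rng.normal(mu_post, sd_post, sims)       # draw the true effect
se = np.sqrt(2 / n_per_arm)                      # SE of the estimated effect
theta_hat = rng.normal(theta, se)                # future trial estimate
z = theta_hat / se

significant = z > norm.ppf(0.975)                # statistical significance
relevant = theta_hat > mcid                      # clinical relevance
# crude benefit-risk: simulated excess toxicity must stay below 5 points
excess_tox = rng.binomial(n_per_arm, p_tox_trt, sims) / n_per_arm \
           - rng.binomial(n_per_arm, p_tox_ctrl, sims) / n_per_arm
favorable = excess_tox < 0.05

print("P(composite success) =", (significant & relevant & favorable).mean())
```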

14.
In this article, we perform Bayesian estimation of stochastic volatility models with heavy-tailed distributions using the Metropolis-adjusted Langevin algorithm (MALA) and the Riemann manifold Metropolis-adjusted Langevin algorithm (MMALA). We provide analytical expressions for the application of these methods, assess their performance on simulated data, and illustrate their use on two financial time series datasets.
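The core MALA update, shown on a toy one-dimensional target for orientation (the stochastic volatility likelihood and the manifold preconditioning of the article are not included):

```python
import numpy as np

rng = np.random.default_rng(5)

def log_pi(x):          # toy target: standard normal log-density (up to const)
    return -0.5 * x**2

def grad_log_pi(x):
    return -x

def mala(x0, h=0.5, n=10_000):
    """Metropolis-adjusted Langevin: drift along the gradient, then
    accept/reject with the usual Metropolis-Hastings correction."""
    x, out = x0, np.empty(n)
    for i in range(n):
        prop = x + 0.5 * h * grad_log_pi(x) + np.sqrt(h) * rng.normal()
        # log proposal densities q(prop | x) and q(x | prop)
        lq_fwd = -(prop - x - 0.5 * h * grad_log_pi(x))**2 / (2 * h)
        lq_bwd = -(x - prop - 0.5 * h * grad_log_pi(prop))**2 / (2 * h)
        log_alpha = log_pi(prop) - log_pi(x) + lq_bwd - lq_fwd
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        out[i] = x
    return out

draws = mala(0.0)
print("mean:", draws.mean().round(3), " var:", draws.var().round(3))
```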

15.
In many clinical research applications, the time to occurrence of one event of interest, which may be obscured by another (so-called competing) event, is investigated. Specific interventions can only have an effect on the endpoint they address, or research questions might focus on risk factors for a certain outcome. Different approaches for the analysis of time-to-event data in the presence of competing risks have been introduced in recent decades, including some new methodologies which are not yet frequently used in the analysis of competing risks data. Cause-specific hazard regression, subdistribution hazard regression, mixture models, vertical modelling and the analysis of time-to-event data based on pseudo-observations are described in this article and are applied to a dataset from a cohort study intended to establish risk stratification for cardiac death after myocardial infarction. Data analysts are encouraged to use the appropriate methods for their specific research questions by comparing different regression approaches in the competing risks setting with regard to assumptions, methodology and interpretation of the results. Notes on the application of the mentioned methods using the statistical software R are presented, and extensions to the presented standard methods proposed in the statistical literature are mentioned.
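For reference, the two hazard notions most often contrasted in this setting (standard definitions, with T the event time and D the event type):

```latex
% cause-specific hazard for cause k
\lambda_k(t) \;=\; \lim_{\Delta t \downarrow 0}
  \frac{\Pr(t \le T < t+\Delta t,\, D = k \mid T \ge t)}{\Delta t},
\\
% subdistribution hazard (Fine-Gray): keeps subjects who failed from other
% causes in the risk set
\tilde\lambda_k(t) \;=\; \lim_{\Delta t \downarrow 0}
  \frac{\Pr\bigl(t \le T < t+\Delta t,\, D = k \mid T \ge t \ \text{or}\ (T < t,\, D \ne k)\bigr)}{\Delta t}.
```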

16.
This article considers the adaptive lasso procedure for the accelerated failure time model with multiple covariates, based on the weighted least squares method, which uses Kaplan-Meier weights to account for censoring. The adaptive lasso method can accomplish variable selection and model estimation simultaneously. Under some mild conditions, the estimator is shown to have sparsity and oracle properties. We use the Bayesian Information Criterion (BIC) for tuning parameter selection, and a bootstrap variance approach for standard errors. Simulation studies and two real data examples are carried out to investigate the performance of the proposed method.
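A condensed sketch of the weighted-least-squares idea with Kaplan-Meier (Stute-type) weights and an adaptive penalty implemented by column rescaling. This is illustrative only: BIC tuning and bootstrap standard errors are omitted, scikit-learn is used for the lasso step, and all names and data are hypothetical:

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def km_weights(time, delta):
    """Kaplan-Meier (Stute) weights for the ordered observations;
    censored points (delta = 0) receive weight zero."""
    order = np.argsort(time)
    d = delta[order].astype(float)
    n = len(time)
    w = np.zeros(n)
    surv = 1.0
    for i in range(n):
        w[i] = surv * d[i] / (n - i)
        surv *= ((n - i - 1) / (n - i)) ** d[i]
    out = np.zeros(n)
    out[order] = w
    return out

def adaptive_lasso_aft(X, time, delta, alpha=0.01):
    w = km_weights(time, delta)                 # weights on uncensored obs
    y = np.log(time)                            # AFT: regress log-time on X
    sw = np.sqrt(w)
    Xw, yw = sw[:, None] * X, sw * y            # weighted LS via row scaling
    init = LinearRegression().fit(Xw, yw).coef_ # initial consistent estimator
    scale = np.abs(init) + 1e-8                 # adaptive penalty weights
    fit = Lasso(alpha=alpha, max_iter=50_000).fit(Xw * scale, yw)
    return fit.coef_ * scale                    # undo the column rescaling

# toy usage with simulated censored data:
rng = np.random.default_rng(6)
n, p = 200, 6
X = rng.normal(size=(n, p))
t_true = np.exp(X @ np.array([1.0, -0.8, 0, 0, 0, 0]) + 0.3 * rng.normal(size=n))
c = rng.exponential(scale=np.median(t_true) * 3, size=n)
time, delta = np.minimum(t_true, c), (t_true <= c).astype(int)
print(adaptive_lasso_aft(X, time, delta).round(2))
```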

17.
Dependent multivariate count data occur in several research studies. These data can be modelled by a multivariate Poisson or negative binomial distribution constructed using copulas. However, when some of the counts are inflated, that is, when the number of observations in some cells is much larger than in other cells, the copula-based multivariate Poisson (or negative binomial) distribution may not fit well and is not an appropriate statistical model for the data. There is a need to modify or adjust the multivariate distribution to account for the inflated frequencies. In this article, we consider the situation where the frequencies of two cells are much higher than those of the other cells and develop a doubly inflated multivariate Poisson distribution function using a multivariate Gaussian copula. We also discuss procedures for regression on covariates for the doubly inflated multivariate count data. To illustrate the proposed methodologies, we present real data containing bivariate count observations with inflation in two cells. Several models and linear predictors with log link functions are considered, and we discuss maximum likelihood estimation of the unknown parameters of the models.

18.
For a two-parameter negative exponential population with both parameters unknown, the bounded-risk sequential estimation problem for the location parameter is considered under an asymmetric linex loss function. An asymptotic second-order expansion of the risk function is derived for a general class of stopping variables. Examples involving purely sequential and accelerated sequential sampling methodologies are included. A Monte Carlo study is carried out to support the asymptotic results and to compare the performance of the different sampling methodologies.
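For reference, the linex loss in its usual parameterization (the standard form; the paper's scaling may differ): for estimation error Δ = θ̂ − θ and shape parameter a ≠ 0,

```latex
L(\Delta) \;=\; b\left(e^{a\Delta} - a\Delta - 1\right), \qquad b > 0 ,
```

so that errors of one sign are penalized nearly exponentially while errors of the opposite sign are penalized almost linearly, which is what makes the loss asymmetric.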

19.
Longitudinal studies occur frequently in many different disciplines. To fully utilize the potential value of the information contained in longitudinal data, various multivariate linear models have been proposed. The methodology and analysis are somewhat unique in their own ways, and their relationships are not well understood or presented. This article describes a general multivariate linear model for longitudinal data and attempts to provide a constructive formulation of the components in the mean response profile. The objective is to point out the extensions and connections of some well-known models that have been obscured by different areas of application. More importantly, the model is expressed in a unified regression form derived from subject matter considerations. Such an approach is simpler and more intuitive than other approaches to modeling and parameter estimation. As a consequence, the analyses of the general class of models for longitudinal data can be easily implemented with standard software.

20.
Communications in Statistics - Theory and Methods, 2012, 41(16-17): 3259-3277
Real data may exhibit larger (or smaller) variability than assumed in an exponential family model, the basis of generalized linear models and additive models. To analyze such data, smooth estimation of the mean and the dispersion function has been introduced in extended generalized additive models using P-spline techniques. This methodology is further explored here by allowing some of the covariates to be modeled parametrically and others nonparametrically. The main contribution of this article is a simulation study investigating the finite-sample performance of the P-spline estimation technique in these extended models, including comparisons with a standard generalized additive modeling approach, as well as with a hierarchical modeling approach.
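A minimal numerical sketch of P-spline smoothing, the building block mentioned above (mean function only, with a fixed smoothing parameter; the extended mean-and-dispersion estimation of the article is not reproduced; assumes SciPy >= 1.8 for BSpline.design_matrix):

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=200)

# Cubic B-spline basis on equally spaced knots
k, n_inner = 3, 20
t = np.r_[[0] * k, np.linspace(0, 1, n_inner), [1] * k]   # clamped knot vector
B = BSpline.design_matrix(x, t, k).toarray()

# Second-order difference penalty (the "P" in P-splines)
D = np.diff(np.eye(B.shape[1]), n=2, axis=0)
lam = 1.0
theta = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)

fitted = B @ theta
print("residual SD:", np.std(y - fitted).round(3))
```

Increasing lam pulls the fit toward a straight line; decreasing it lets the spline interpolate the noise, which is the usual bias-variance trade-off controlled by the smoothing parameter.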
