Similar Articles
20 similar articles found (search time: 15 ms)
1.
We study the maxiset performance of a large collection of block thresholding wavelet estimators, namely the horizontal block thresholding family. We provide sufficient conditions on the choices of rates and threshold values to ensure that the adaptive estimators involved attain large maxisets. Moreover, we prove that any estimator of such a family reconstructs the Besov balls with a near-minimax optimal rate that can be faster than that of any separable thresholding estimator. Then, we identify, in particular cases, the best estimator of such a family, that is, the one associated with the largest maxiset. A distinctive feature of this paper is a refined approach that allows method-dependent threshold values. By a series of simulation studies, we confirm the good performance of the best estimator by comparing it with the other members of its family.
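As a rough illustration of the idea (not the specific estimators studied in the paper), block thresholding groups wavelet coefficients into blocks and keeps or kills each block as a whole according to its average energy. The block size and threshold below are arbitrary demo choices:

```python
import numpy as np

def block_hard_threshold(coeffs, block_size, threshold):
    """Hard-threshold coefficients in non-overlapping blocks: a block is
    kept only if its mean squared energy exceeds threshold**2."""
    out = np.zeros_like(coeffs, dtype=float)
    n = len(coeffs)
    for start in range(0, n, block_size):
        block = coeffs[start:start + block_size]
        if np.mean(block ** 2) > threshold ** 2:
            out[start:start + len(block)] = block
    return out

# A noisy coefficient vector: one strong block, the rest near zero.
rng = np.random.default_rng(0)
c = rng.normal(0.0, 0.1, size=16)
c[4:8] += 5.0
kept = block_hard_threshold(c, block_size=4, threshold=1.0)
```

Only the energetic block survives; the small-coefficient blocks are zeroed out jointly, which is what lets block rules beat term-by-term (separable) thresholding in some regimes.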

2.
Dead recoveries of marked animals are commonly used to estimate survival probabilities. Band-recovery models can be parameterized either by r (the probability of recovering a band conditional on the death of the animal) or by f (the probability that an animal will be killed, retrieved, and have its band reported). The r parametrization can be implemented in a capture-recapture framework with two states (alive and newly dead), mortality being the transition probability between the two states. The authors show here that the f parametrization can also be implemented in a multistate framework by imposing simple constraints on some parameters. They illustrate it using data on the mallard and the snow goose. However, they note that because it does not entirely separate the individual survival and encounter processes, the f parametrization must be used with care in reduced models, or in the presence of estimates at the boundary of the parameter space. As they show, a multistate framework allows the use of powerful software for model fitting and testing the goodness-of-fit of models; it also affords the implementation of complex models such as those based on mixtures of information or uncertain states.
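The algebraic link between the two parametrizations is simple: with annual survival probability S, the animal must die (probability 1 − S) and then be retrieved and reported (probability r), so f = (1 − S)r. A minimal sketch with made-up values (not the mallard or snow goose estimates):

```python
def recovery_rate(survival, r):
    """f parametrization from the r parametrization: the animal must die
    (prob. 1 - survival) and then be retrieved and reported (prob. r)."""
    return (1.0 - survival) * r

# Hypothetical values: 60% annual survival, 20% retrieval-and-reporting.
f = recovery_rate(survival=0.6, r=0.2)
```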

3.
In this paper, we consider the problem of adaptive density or survival function estimation in an additive model defined by Z = X + Y with X independent of Y, when both random variables are non-negative. This model is relevant, for instance, in reliability, where we are interested in the failure time of a certain material that cannot be isolated from the system it belongs to. Our goal is to recover the distribution of X (density or survival function) from n observations of Z, assuming that the distribution of Y is known. This is the classical statistical problem of deconvolution, which has in many cases been tackled using Fourier-type approaches. In the present case, however, the random variables have the particularity of being supported on the non-negative half-line. Exploiting this, we propose a new angle of attack by building a projection estimator on an appropriate Laguerre basis. We present upper bounds on the mean integrated squared risk of our density and survival function estimators. We then describe a non-parametric data-driven strategy for selecting a relevant projection space. The procedures are illustrated with simulated data and compared with a more classical deconvolution approach based on Fourier methods. Our procedure achieves faster convergence rates than Fourier methods for estimating these functions.
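A sketch of the kind of Laguerre basis such a projection estimator can be built on; the normalization here is one common convention (φ_k(x) = √2 L_k(2x) e^(−x)), not necessarily the authors' exact choice. These functions form an orthonormal basis of L²([0, ∞)), which the numerical check below confirms:

```python
import numpy as np
from numpy.polynomial import laguerre

def laguerre_function(k, x):
    """k-th Laguerre function sqrt(2) * L_k(2x) * exp(-x); these form an
    orthonormal basis of L^2([0, infinity))."""
    ek = np.zeros(k + 1)
    ek[k] = 1.0
    return np.sqrt(2.0) * laguerre.lagval(2.0 * x, ek) * np.exp(-x)

def trapezoid(y, x):
    """Plain trapezoidal rule (keeps the sketch NumPy-version agnostic)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

x = np.linspace(0.0, 40.0, 20001)
phi0 = laguerre_function(0, x)
phi1 = laguerre_function(1, x)
norm0 = trapezoid(phi0 * phi0, x)    # ~1: unit norm
inner01 = trapezoid(phi0 * phi1, x)  # ~0: orthogonality
```

A projection estimator then estimates the basis coefficients of the target density from the observations of Z and truncates the expansion at a data-driven level.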

4.
Under the theory that stress has a complex, multidimensional composition, the factorial structure of technostress (computer technology stress) still requires verification. After reviewing the development of the technostress construct, and drawing on existing research findings, this paper proposes a second-order two-factor model of technostress. Two alternative models were compared using structural equation modelling; the results show that the second-order two-factor model better reflects technostress. This offers a new perspective for subsequent research on the relationships between technostress and individuals and organizations, and it completes the development of the technostress scale from the exploratory to the confirmatory stage.

5.
This article presents a Bayesian latent variable model used to analyze ordinal response survey data by taking into account the characteristics of respondents. The ordinal response data are viewed as multivariate responses arising from continuous latent variables with known cut-points. Each respondent is characterized by two parameters that have a Dirichlet process as their joint prior distribution. The proposed mechanism adjusts for classes of personalities. The model is applied to student survey data in course evaluations. Goodness-of-fit (GoF) procedures are developed for assessing the validity of the model. The proposed GoF procedures are simple, intuitive, and do not seem to be a part of current Bayesian practice.
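The latent-variable mechanism with known cut-points can be sketched directly: a continuous latent response falls into ordinal category k when it lies between the (k−1)-th and k-th thresholds. The categories and thresholds below are hypothetical, not those of the course-evaluation data:

```python
import numpy as np

def ordinal_from_latent(z, cutpoints):
    """Map continuous latent responses to ordinal categories 1..K using
    K-1 known, increasing cut-points."""
    return np.searchsorted(cutpoints, z) + 1

cuts = np.array([-1.0, 0.0, 1.0])          # three cut-points -> 4 categories
latent = np.array([-2.0, -0.5, 0.3, 2.5])
cats = ordinal_from_latent(latent, cuts)
```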

6.
Categorical analysis of variance (CATANOVA) is a statistical method designed to analyse variability between treatments of interest to the researcher. There are well-established links between CATANOVA and the τ statistic of Goodman and Kruskal, which, for the purpose of graphically identifying this variation, is partitioned using the singular value decomposition of Non-Symmetrical Correspondence Analysis (NSCA) (D'Ambra & Lauro, 1989). The aim of this paper is to show a decomposition of the Between Sum of Squares (BSS), measured both in the CATANOVA framework and in the τ statistic, into location, dispersion and higher-order components. This decomposition is developed using Emerson's orthogonal polynomials. Starting from this decomposition, a statistical test and a confidence circle are derived for each component and for each modality into which the BSS is decomposed, respectively. A customer satisfaction study illustrates the methodology.
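The Goodman–Kruskal τ that CATANOVA connects to measures the proportional reduction in prediction error for one categorical variable given another; it can be computed directly from a contingency table. A minimal sketch (predicting the column variable from the row variable):

```python
import numpy as np

def goodman_kruskal_tau(table):
    """Goodman-Kruskal tau: proportional reduction in prediction error
    for the column variable given the row variable."""
    p = np.asarray(table, dtype=float)
    p = p / p.sum()
    row = p.sum(axis=1)          # row margins
    col = p.sum(axis=0)          # column margins
    num = np.sum(p ** 2 / row[:, None]) - np.sum(col ** 2)
    return num / (1.0 - np.sum(col ** 2))

# Perfect predictability: each row concentrates on one column -> tau = 1.
tau_perfect = goodman_kruskal_tau([[10, 0], [0, 5]])
# Independence: knowing the row does not help -> tau = 0.
tau_indep = goodman_kruskal_tau([[4, 4], [4, 4]])
```

The paper's contribution is to split the numerator (the BSS) further into location, dispersion and higher-order pieces via orthogonal polynomials.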

7.
The Cox‐Aalen model, obtained by replacing the baseline hazard function in the well‐known Cox model with a covariate‐dependent Aalen model, allows for both fixed and dynamic covariate effects. In this paper, we examine maximum likelihood estimation for a Cox‐Aalen model based on interval‐censored failure times with fixed covariates. The resulting estimator globally converges to the truth slower than the parametric rate, but its finite‐dimensional component is asymptotically efficient. Numerical studies show that estimation via a constrained Newton method performs well in terms of both finite sample properties and processing time for moderate‐to‐large samples with few covariates. We conclude with an application of the proposed methods to assess risk factors for disease progression in psoriatic arthritis.

8.
In this paper, we study the problem of testing the hypothesis on whether the density f of a random variable on a sphere belongs to a given parametric class of densities. We propose two test statistics based on the L2 and L1 distances between a non‐parametric density estimator adapted to circular data and a smoothed version of the specified density. The asymptotic distribution of the L2 test statistic is provided under the null hypothesis and contiguous alternatives. We also consider a bootstrap method to approximate the distribution of both test statistics. Through a simulation study, we explore the moderate sample performance of the proposed tests under the null hypothesis and under different alternatives. Finally, the procedure is illustrated by analysing a real data set based on wind direction measurements.
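A sketch of the kind of non-parametric circular density estimator involved, using a von Mises kernel whose concentration parameter plays the role of an inverse bandwidth. The sample, kernel and concentration value below are illustrative, not the paper's wind-direction data:

```python
import numpy as np

def vonmises_kde(theta_grid, data, kappa):
    """Circular kernel density estimate with a von Mises kernel;
    larger kappa means a narrower kernel (smaller bandwidth)."""
    diffs = theta_grid[:, None] - data[None, :]
    kern = np.exp(kappa * np.cos(diffs)) / (2.0 * np.pi * np.i0(kappa))
    return kern.mean(axis=1)

rng = np.random.default_rng(1)
data = rng.vonmises(mu=0.0, kappa=2.0, size=200)
grid = np.linspace(-np.pi, np.pi, 512, endpoint=False)
fhat = vonmises_kde(grid, data, kappa=4.0)
mass = fhat.sum() * (2.0 * np.pi / len(grid))  # integrates to ~1 on the circle
```

An L2 test statistic would then integrate the squared difference between `fhat` and a smoothed version of the hypothesized parametric density over the same grid.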

9.
Abstract. This article deals with two problems concerning the probabilities of causation defined by Pearl (Causality: Models, Reasoning, and Inference, 2nd edn, 2009, Cambridge University Press, New York), namely, the probability that one observed event was a necessary (or sufficient, or both) cause of another: one is to derive new bounds, and the other is to provide covariate selection criteria. Tian & Pearl (Ann. Math. Artif. Intell., 28, 2000, 287–313) showed how to bound the probabilities of causation using information from experimental and observational studies, with minimal assumptions about the data-generating process, and gave identifiability conditions for these probabilities. In this article, we derive narrower bounds using covariate information that is available from those studies. In addition, we propose the conditional monotonicity assumption so as to further narrow the bounds. Moreover, we discuss the covariate selection problem from the viewpoint of estimation accuracy, and show that selecting a covariate that has a direct effect on the outcome variable cannot always improve the estimation accuracy, contrary to the situation in linear regression models. These results provide more accurate information for public policy, legal determination of responsibility and personal decision making.
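The flavor of these bounds can be sketched for the probability of necessity (PN). The formulas below are the classical covariate-free bounds as we recall them from Tian & Pearl (2000), so the paper's covariate-refined bounds would be tighter; the argument names are ours: `p_xy` = P(x, y), `p_y` = P(y), `p_y_do_xprime` = P(y | do(x′)), `p_xprime_yprime` = P(x′, y′).

```python
def pn_bounds(p_xy, p_y, p_y_do_xprime, p_xprime_yprime):
    """Bounds on the probability of necessity PN, combining observational
    quantities with the experimental quantity P(y | do(x'))."""
    lower = max(0.0, (p_y - p_y_do_xprime) / p_xy)
    upper = min(1.0, ((1.0 - p_y_do_xprime) - p_xprime_yprime) / p_xy)
    return lower, upper

# Hypothetical study: P(x,y)=0.3, P(y)=0.5, P(y|do(x'))=0.3, P(x',y')=0.3.
lo, hi = pn_bounds(p_xy=0.3, p_y=0.5, p_y_do_xprime=0.3, p_xprime_yprime=0.3)
```

Even without point identification, such an interval can already be informative, e.g. for legal standards that require PN to exceed one half.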

10.

When analyzing categorical data using loglinear models in sparse contingency tables, asymptotic results may fail. In this paper the empirical properties of three commonly used asymptotic tests of independence, based on the uniform association model for ordinal data, are investigated by means of Monte Carlo simulation. Five different bootstrapped tests of independence are presented and compared to the asymptotic tests. The comparisons are made with respect to both size and power properties of the tests. Results indicate that the asymptotic tests have poor size control. The test based on the estimated association parameter is severely conservative and the two chi-squared tests (Pearson, likelihood-ratio) are both liberal. The bootstrap tests that either use a parametric assumption or are based on non-pivotal test statistics do not perform better than the asymptotic tests in all situations. The bootstrap tests that are based on approximately pivotal statistics provide both adjustment of size and enhancement of power. These tests are therefore recommended for use in situations similar to those included in the simulation study.
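A minimal sketch of one such bootstrap test (a plain Pearson chi-square bootstrapped under the independence fit of the observed margins; this is a generic version, not the uniform-association tests of the paper, and uses a non-pivotal statistic for simplicity):

```python
import numpy as np

def pearson_chi2(t):
    """Pearson chi-square statistic against the independence fit of t."""
    t = np.asarray(t, dtype=float)
    e = t.sum(axis=1)[:, None] * t.sum(axis=0)[None, :] / t.sum()
    e_safe = np.where(e == 0, 1.0, e)  # a cell with e == 0 also has t == 0
    return float(np.sum((t - e) ** 2 / e_safe))

def bootstrap_indep_pvalue(table, n_boot=500, seed=0):
    """Bootstrap the null distribution of the statistic by resampling
    tables from the product of the observed margins (the independence fit)."""
    rng = np.random.default_rng(seed)
    obs = np.asarray(table, dtype=float)
    n = int(obs.sum())
    p_null = (obs.sum(axis=1)[:, None] * obs.sum(axis=0)[None, :] / n ** 2).ravel()
    stat = pearson_chi2(obs)
    hits = sum(
        pearson_chi2(rng.multinomial(n, p_null).reshape(obs.shape)) >= stat
        for _ in range(n_boot)
    )
    return (hits + 1) / (n_boot + 1)

p_assoc = bootstrap_indep_pvalue([[20, 2], [2, 20]])    # strong association
p_indep = bootstrap_indep_pvalue([[10, 10], [10, 10]])  # exact independence
```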

11.
Investigators often gather longitudinal data to assess changes in responses over time within subjects and to relate these changes to within‐subject changes in predictors. Missing data are common in such studies and predictors can be correlated with subject‐specific effects. Maximum likelihood methods for generalized linear mixed models provide consistent estimates when the data are ‘missing at random’ (MAR) but can produce inconsistent estimates in settings where the random effects are correlated with one of the predictors. On the other hand, conditional maximum likelihood methods (and closely related maximum likelihood methods that partition covariates into between‐ and within‐cluster components) provide consistent estimation when random effects are correlated with predictors but can produce inconsistent covariate effect estimates when data are MAR. Using theory, simulation studies, and fits to example data this paper shows that decomposition methods using complete covariate information produce consistent estimates. In some practical cases these methods, that ostensibly require complete covariate information, actually only involve the observed covariates. These results offer an easy‐to‐use approach to simultaneously protect against bias from both cluster‐level confounding and MAR missingness in assessments of change.
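The between/within covariate decomposition these methods rely on is easy to sketch: each covariate value is split into its cluster mean (between component) and the deviation from that mean (within component). Toy data with two clusters:

```python
import numpy as np

def between_within(x, cluster):
    """Decompose a covariate into its cluster mean (between component)
    and the deviation from that mean (within component)."""
    x = np.asarray(x, dtype=float)
    cluster = np.asarray(cluster)
    between = np.empty_like(x)
    for c in np.unique(cluster):
        mask = cluster == c
        between[mask] = x[mask].mean()
    return between, x - between

x = np.array([1.0, 3.0, 10.0, 14.0])
g = np.array([0, 0, 1, 1])
b, w = between_within(x, g)   # b + w reconstructs x exactly
```

Entering `b` and `w` as separate regressors lets the within-cluster effect be estimated free of cluster-level confounding.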

12.
Probabilistic matching of records is widely used to create linked data sets for use in health science, epidemiological, economic, demographic and sociological research. Clearly, this type of matching can lead to linkage errors, which in turn can lead to bias and increased variability when standard statistical estimation techniques are used with the linked data. In this paper we develop unbiased regression parameter estimates to be used when fitting a linear model with nested errors to probabilistically linked data. Since estimation of variance components is typically an important objective when fitting such a model, we also develop appropriate modifications to standard methods of variance components estimation in order to account for linkage error. In particular, we focus on three widely used methods of variance components estimation: analysis of variance, maximum likelihood and restricted maximum likelihood. Simulation results show that our estimators perform reasonably well when compared to standard estimation methods that ignore linkage errors.

13.
We consider estimation in the single‐index model where the link function is monotone. For this model, a profile least‐squares estimator has been proposed to estimate the unknown link function and index. Although it is natural to propose this procedure, it is still unknown whether it produces index estimates that converge at the parametric rate. We show that this holds if we solve a score equation corresponding to this least‐squares problem. Using a Lagrangian formulation, we show how one can solve this score equation without any reparametrization. This makes it easy to solve the score equations in high dimensions. We also compare our method with the effective dimension reduction and the penalized least‐squares estimator methods, both available on CRAN as R packages, and compare with link‐free methods, where the covariates are elliptically symmetric.
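The monotone-link constraint is what makes isotonic regression, computed by the pool-adjacent-violators algorithm (PAVA), the natural inner step of such a profile procedure. A minimal PAVA sketch, independent of the paper's score-equation machinery:

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators algorithm: least-squares fit of a
    nondecreasing sequence to y."""
    blocks = []  # each block: [sum, count]
    for v in map(float, y):
        blocks.append([v, 1])
        # merge while the last block's mean falls below the previous one's
        while len(blocks) > 1 and blocks[-1][0] / blocks[-1][1] < blocks[-2][0] / blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)
    return np.array(out)

fit = pava([1.0, 3.0, 2.0, 4.0])  # the violating pair (3, 2) is pooled
```

In the profile least-squares estimator, PAVA would be run on the responses ordered by the current index values, and the index is then updated.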

14.
In this paper we present methods for inference on data selected by a complex sampling design for a class of statistical models for the analysis of ordinal variables. Specifically, assuming that the sampling scheme is not ignorable, we derive for the class of cub models (Combination of discrete Uniform and shifted Binomial distributions) variance estimates for a complex two stage stratified sample. Both Taylor linearization and repeated replication variance estimators are presented. We also provide design‐based test diagnostics and goodness‐of‐fit measures. We illustrate by means of real data analysis the differences between survey‐weighted and unweighted point estimates and inferences for cub model parameters.
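The cub likelihood is driven by a simple two-component probability mass function; a sketch in the standard parametrization (π = weight of the shifted binomial "feeling" component with parameter ξ, 1 − π = weight of the uniform "uncertainty" component; the values below are illustrative):

```python
import numpy as np
from math import comb

def cub_pmf(m, pi, xi):
    """CUB model probabilities on ratings 1..m: a pi-weighted shifted
    binomial (feeling parameter xi) plus a (1-pi)-weighted discrete uniform."""
    r = np.arange(1, m + 1)
    shifted_binom = np.array(
        [comb(m - 1, k - 1) * (1.0 - xi) ** (k - 1) * xi ** (m - k) for k in r]
    )
    return pi * shifted_binom + (1.0 - pi) / m

p = cub_pmf(m=7, pi=0.8, xi=0.3)  # probabilities for ratings 1..7
```

A survey-weighted fit would maximize the weighted sum of `log(p[r - 1])` over respondents, with variances then obtained by linearization or replication as in the paper.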

15.
There are many methods for analyzing longitudinal ordinal response data with random dropout. These include maximum likelihood (ML), weighted estimating equations (WEEs), and multiple imputations (MI). In this article, using a Markov model where the effect of previous response on the current response is investigated as an ordinal variable, the likelihood is partitioned to simplify the use of existing software. Simulated data, generated to present a three-period longitudinal study with random dropout, are used to compare performance of ML, WEE, and MI methods in terms of standardized bias and coverage probabilities. These estimation methods are applied to a real medical data set.

16.
Abstract. We propose a spline‐based semiparametric maximum likelihood approach to analysing the Cox model with interval‐censored data. With this approach, the baseline cumulative hazard function is approximated by a monotone B‐spline function. We extend the generalized Rosen algorithm to compute the maximum likelihood estimate. We show that the estimator of the regression parameter is asymptotically normal and semiparametrically efficient, although the estimator of the baseline cumulative hazard function converges at a rate slower than root‐n. We also develop an easy‐to‐implement method for consistently estimating the standard error of the estimated regression parameter, which facilitates the proposed inference procedure for the Cox model with interval‐censored data. The proposed method is evaluated by simulation studies regarding its finite sample performance and is illustrated using data from a breast cosmesis study.

17.
We present a novel methodology for estimating the parameters of a finite mixture model (FMM) based on partially rank‐ordered set (PROS) sampling and use it in a fishery application. A PROS sampling design first selects a simple random sample of fish and creates partially rank‐ordered judgement subsets by dividing units into subsets of prespecified sizes. The final measurements are then obtained from these partially ordered judgement subsets. The traditional expectation–maximization algorithm is not directly applicable for these observations. We propose a suitable expectation–maximization algorithm to estimate the parameters of the FMMs based on PROS samples. We also study the problem of classification of the PROS sample into the components of the FMM. We show that the maximum likelihood estimators based on PROS samples perform substantially better than their simple random sample counterparts even with small samples. The results are used to classify a fish population using the length‐frequency data.

18.
This paper derives estimating equations for modelling circular data with longitudinal structure for a family of circular distributions with two parameters. Estimating equations for modelling the circular mean and the resultant length are given separately. Estimating equations are then derived for a mixed model. This paper shows that the estimators that follow from these equations are consistent and asymptotically normal. The results are illustrated by an example about the direction taken by homing pigeons.
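The two quantities being modelled, the circular mean direction and the mean resultant length, are computed from a sample of angles as follows (the headings below are illustrative, not the homing-pigeon data):

```python
import numpy as np

def circular_summary(theta):
    """Circular mean direction and mean resultant length of angles (radians)."""
    c = np.cos(theta).mean()
    s = np.sin(theta).mean()
    return float(np.arctan2(s, c)), float(np.hypot(c, s))

# Headings clustered symmetrically around 0 rad.
angles = np.array([0.1, -0.1, 0.2, -0.2])
mean_dir, rbar = circular_summary(angles)
```

The mean resultant length `rbar` lies in [0, 1] and measures concentration: values near 1 indicate tightly clustered directions.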

19.
Remote sensing of the earth with satellites yields datasets that can be massive in size, nonstationary in space, and non‐Gaussian in distribution. To overcome computational challenges, we use the reduced‐rank spatial random effects (SRE) model in a statistical analysis of cloud‐mask data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on board NASA's Terra satellite. Parameterisations of cloud processes are the biggest source of uncertainty and sensitivity in different climate models' future projections of Earth's climate. An accurate quantification of the spatial distribution of clouds, as well as a rigorously estimated pixel‐scale clear‐sky‐probability process, is needed to establish reliable estimates of cloud‐distributional changes and trends caused by climate change. Here we give a hierarchical spatial‐statistical modelling approach for a very large spatial dataset of 2.75 million pixels, corresponding to a granule of MODIS cloud‐mask data, and we use spatial change‐of‐support relationships to estimate cloud fraction at coarser resolutions. Our model is non‐Gaussian; it postulates a hidden process for the clear‐sky probability that makes use of the SRE model, EM estimation, and optimal (empirical Bayes) spatial prediction of the clear‐sky‐probability process. Measures of prediction uncertainty are also given.

20.
The estimation of a multivariate function from a stationary m-dependent process is investigated, with a special focus on the case where m is large or unbounded. We develop an adaptive estimator based on wavelet methods. Under flexible assumptions on the nonparametric model, we establish the good performance of our estimator by determining sharp rates of convergence under two kinds of error: the pointwise mean squared error and the mean integrated squared error. We illustrate our theoretical results on the multivariate density estimation problem, the estimation of density derivatives, density estimation in a GARCH-type model and the multivariate regression function estimation problem. The performance of the proposed estimator is also demonstrated in a numerical study on simulated and real data sets.
