This paper focuses on the use of strategic planning among small and medium-sized enterprises (SMEs) in the UK manufacturing sector. It analyses the relationships between the intensity of strategic planning, business objectives, perceived performance, changes in the business environment, and the use of capital budgeting techniques. Capital budgeting is of particular interest as an area of investigation because it has seldom featured in previous studies of strategic planning behaviour. These issues were investigated via a survey of UK manufacturing SMEs carried out in the winter of 1996/97.
The key results suggest that SMEs incorporate a range of objectives into their strategic planning process, with profit improvement perceived to be the most important objective, followed by sales growth. SMEs engaged in detailed strategic planning are more likely to use formal capital budgeting techniques, including the net present value method, which is consistent with maximising the company's value. Perceived profitability and success in achieving organisational objectives were positively associated with planning detail, suggesting that strategic planning is a key component in improving performance. Planning detail was also associated with a significantly higher level of perceived change in the business environment.
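The net present value method mentioned above discounts each future cash flow back to the present and sums the results; a project is value-maximising only if the total is positive. A minimal sketch, with hypothetical cash flows and discount rates chosen purely for illustration:

```python
def npv(rate, cashflows):
    """Net present value: discount cash flow at period t by (1 + rate)**t and sum."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical project: an initial outlay followed by three annual inflows.
flows = [-1000.0, 400.0, 400.0, 400.0]
print(round(npv(0.05, flows), 2))  # → 89.3  (worth undertaking at a 5% discount rate)
print(npv(0.10, flows) > 0)       # → False (rejected at a 10% discount rate)
```

The same cash flows can be acceptable or not depending on the discount rate, which is why formal techniques like NPV matter more in volatile environments than simple payback rules.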
This paper surveys the fundamental principles of subjective Bayesian inference in econometrics and the implementation of those principles using posterior simulation methods. The emphasis is on the combination of models and the development of predictive distributions. Moving beyond conditioning on a fixed number of completely specified models, the paper introduces subjective Bayesian tools for formal comparison of these models with as yet incompletely specified models. The paper then shows how posterior simulators can facilitate communication between investigators (for example, econometricians) on the one hand and remote clients (for example, decision makers) on the other, enabling clients to vary the prior distributions and functions of interest employed by investigators. A theme of the paper is the practicality of subjective Bayesian methods. To this end, the paper describes publicly available software for Bayesian inference, model development, and communication and provides illustrations using two simple econometric models.
We show the second-order relative accuracy, on bounded sets, of the Studentized bootstrap, exponentially tilted bootstrap and nonparametric likelihood tilted bootstrap, for means and smooth functions of means. We also consider the relative errors for larger deviations. Our method exploits certain connections between Edgeworth and saddlepoint approximations to simplify the computations.
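The Studentized bootstrap (bootstrap-t) studied above resamples the data, computes a t-statistic from each resample, and inverts the resulting quantiles to form a confidence interval. The following is an illustrative sketch for the mean of a small sample, not the paper's own procedure; the data, resample count, and seed are arbitrary:

```python
import random
import statistics

def studentized_bootstrap_ci(data, b=2000, alpha=0.05, seed=0):
    """Bootstrap-t confidence interval for the mean of `data`."""
    rng = random.Random(seed)
    n = len(data)
    mean = statistics.fmean(data)
    se = statistics.stdev(data) / n ** 0.5
    tstats = []
    for _ in range(b):
        sample = [rng.choice(data) for _ in range(n)]
        s = statistics.stdev(sample) / n ** 0.5
        if s > 0:  # skip degenerate resamples
            tstats.append((statistics.fmean(sample) - mean) / s)
    tstats.sort()
    t_hi = tstats[int((1 - alpha / 2) * len(tstats)) - 1]  # upper t quantile
    t_lo = tstats[int((alpha / 2) * len(tstats))]          # lower t quantile
    return mean - t_hi * se, mean - t_lo * se

data = [4.1, 5.3, 3.8, 6.0, 5.1, 4.7, 5.6, 4.9]
low, high = studentized_bootstrap_ci(data)
print(low < statistics.fmean(data) < high)  # the interval covers the sample mean
```

Studentizing is what buys the second-order accuracy: the pivot's distribution is closer to its limit than that of the unstudentized mean.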
The quasilikelihood estimator is widely used in data analysis where a likelihood is not available. We show that, for a given variance function, it is not only conservative, minimizing a maximum risk, but also robust against possible misspecification of either the likelihood or the cumulants of the model. In examples it is compared with estimators based on maximum likelihood and quadratic estimating functions.
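Quasilikelihood estimation requires only a mean and a variance function: the estimate solves the quasi-score equation rather than maximizing a full likelihood. A minimal illustrative sketch (not the paper's estimator): for i.i.d. observations with variance function V(mu) = mu, the quasi-score is sum((y - mu) / V(mu)), which is zeroed by the sample mean.

```python
def quasi_score(mu, ys, var_fn):
    """Quasi-score U(mu) = sum of (y - mu) / V(mu) over the observations."""
    return sum((y - mu) / var_fn(mu) for y in ys)

def solve_mu(ys, var_fn, lo=1e-6, hi=1e6, tol=1e-10):
    """Root of the quasi-score by bisection (U is decreasing in mu here)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if quasi_score(mid, ys, var_fn) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

counts = [2, 3, 0, 5, 1, 4]  # hypothetical count data
print(round(solve_mu(counts, lambda m: m), 6))  # → 2.5, the sample mean
```

Only V(mu) was specified, not a full distribution; that is the sense in which the estimator is robust to misspecified likelihoods or cumulants.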
We extend Wei and Tanner's (1991) multiple imputation approach in semi-parametric linear regression for univariate censored data to clustered censored data. The main idea is to iterate the following two steps: 1) use data augmentation to impute the censored failure times; 2) fit a linear model to the imputed complete data, taking into account the clustering among failure times. In particular, we propose using generalized estimating equations (GEE) or a linear mixed-effects model to implement the second step. Through simulation studies, our proposal compares favorably to the independence approach (Lee et al., 1993), which ignores the within-cluster correlation in estimating the regression coefficients. Our proposal is easy to implement using existing software.
Kolassa and Tanner (J. Am. Stat. Assoc. (1994) 89, 697–702) present the Gibbs-Skovgaard algorithm for approximate conditional inference. Kolassa (Ann. Statist. (1999), 27, 129–142) gives conditions under which their Markov chain is known to converge. This paper calculates explicit bounds on convergence rates, in terms of quantities calculable directly from the chain's transition operator. These results are useful in cases like those considered by Kolassa (1999).
Testing for homogeneity in finite mixture models has been investigated by many researchers. The asymptotic null distribution of the likelihood ratio test (LRT) is very complex and difficult to use in practice. We propose a modified LRT for homogeneity in finite mixture models with a general parametric kernel distribution family. The modified LRT has a χ²-type null limiting distribution and is asymptotically most powerful under local alternatives. Simulations show that it performs better than competing tests. They also reveal that the limiting distribution, with some adjustment, can satisfactorily approximate the quantiles of the test statistic, even for moderate sample sizes.
A substantial degree of uncertainty exists surrounding the reconstruction of events based on memory recall. This form of measurement error affects the performance of structured interviews such as the Composite International Diagnostic Interview (CIDI), an important tool to assess mental health in the community. Measurement error probably explains the discrepancy in estimates between longitudinal studies with repeated assessments (the gold-standard), yielding approximately constant rates of depression, versus cross-sectional studies which often find increasing rates closer in time to the interview. Repeated assessments of current status (or recent history) are more reliable than reconstruction of a person's psychiatric history based on a single interview. In this paper, we demonstrate a method of estimating a time-varying measurement error distribution in the age of onset of an initial depressive episode, as diagnosed by the CIDI, based on an assumption regarding age-specific incidence rates. High-dimensional non-parametric estimation is achieved by the EM-algorithm with smoothing. The method is applied to data from a Norwegian mental health survey in 2000. The measurement error distribution changes dramatically from 1980 to 2000, with increasing variance and greater bias further away in time from the interview. Some influence of the measurement error on already published results is found.
We develop strategies for Bayesian modelling as well as model comparison, averaging and selection for compartmental models with particular emphasis on those that occur in the analysis of positron emission tomography (PET) data. Both modelling and computational issues are considered. Biophysically inspired informative priors are developed for the problem at hand, and by comparison with default vague priors it is shown that the proposed modelling is not overly sensitive to prior specification. It is also shown that an additive normal error structure does not describe measured PET data well, despite being very widely used, and that within a simple Bayesian framework simultaneous parameter estimation and model comparison can be performed with a more general noise model. The proposed approach is compared with standard techniques using both simulated and real data. In addition to good, robust estimation performance, the proposed technique provides, automatically, a characterisation of the uncertainty in the resulting estimates which can be considerable in applications such as PET.
Multivariate nonparametric smoothers, such as kernel-based smoothers and thin-plate spline smoothers, are adversely impacted by the sparseness of data in high dimensions, also known as the curse of dimensionality. Adaptive smoothers, which can exploit the underlying smoothness of the regression function, may partially mitigate this effect. This paper presents a comparative simulation study of a novel adaptive smoother (IBR) against competing multivariate smoothers available as packages or functions within the R language and environment for statistical computing. Comparisons between the methods are made on simulated datasets of moderate size, from 50 to 200 observations, with two, five or ten potential explanatory variables, and on a real dataset. The results show that the good asymptotic properties of IBR are complemented by very good behavior on moderate-sized datasets, with results similar to those obtained with Duchon low-rank splines.