Full text (paid): 34 articles
Free full text: 1 article
By discipline: Management 5; General 3; Sociology 2; Statistics 25
By year: 2023 (1), 2019 (1), 2018 (2), 2017 (3), 2016 (1), 2015 (2), 2014 (1), 2013 (5), 2012 (4), 2011 (2), 2010 (1), 2009 (1), 2008 (1), 2007 (2), 2004 (1), 2002 (1), 2001 (1), 1999 (1), 1998 (2), 1994 (1), 1993 (1)
35 query results in total (search time: 843 ms)
21.
Spatial econometric models estimated on big geo-located point data face at least two problems: limited computational capacity and inefficient forecasting for new out-of-sample geo-points. This is because the spatial weights matrix W is defined for in-sample observations only and the computation is complex. Machine learning models suffer from the same issue when kriging is used for prediction, so the problem has remained unsolved. The paper presents a novel methodology for estimating spatial models on big data and predicting at new locations. The approach uses the bootstrap and tessellation to calibrate both the model and the space. The best bootstrapped model is selected with the PAM (Partitioning Around Medoids) algorithm by classifying the regression coefficients jointly, in a non-independent manner. Voronoi polygons for the geo-points used in the best model provide a representative division of space. New out-of-sample points are assigned to tessellation tiles and linked to the spatial weights matrix as replacements for the original points, which makes it feasible to use the calibrated spatial models as a forecasting tool for new locations. There is no trade-off between forecast quality and computational efficiency in this approach. An empirical example illustrates a model for business locations and firms' profitability.
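As an illustration of the out-of-sample step described above, the short Python sketch below assigns new geo-points to the Voronoi tile of their nearest in-sample point so that the in-sample spatial weights matrix W can be reused; locating a point in a Voronoi tessellation is equivalent to finding its nearest seed, so a KD-tree query suffices. The coordinates, the choice of a k-nearest-neighbour W, and all variable names are assumptions for the example, not taken from the paper.

# Illustrative sketch (not the authors' code): assigning new, out-of-sample
# geo-points to the Voronoi tile of an in-sample point, so the in-sample
# spatial weights matrix W can be reused for prediction.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# In-sample geo-points retained in the "best" bootstrapped model (hypothetical).
in_sample_xy = rng.uniform(0, 100, size=(500, 2))

# Row-standardised k-nearest-neighbour spatial weights matrix W (k = 5),
# built for the in-sample points only.
k = 5
tree = cKDTree(in_sample_xy)
_, nbrs = tree.query(in_sample_xy, k=k + 1)       # first neighbour is the point itself
W = np.zeros((len(in_sample_xy), len(in_sample_xy)))
for i, row in enumerate(nbrs):
    W[i, row[1:]] = 1.0 / k

# New out-of-sample locations: each is mapped to the Voronoi tile
# (i.e. the nearest in-sample seed) and inherits that seed's row of W.
new_xy = rng.uniform(0, 100, size=(10, 2))
_, tile_index = tree.query(new_xy, k=1)
W_rows_for_new_points = W[tile_index, :]          # shape (10, 500)
print(tile_index)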
22.
Identifying mediators in variable chains as part of a causal mediation analysis can shed light on issues of causation, assessment, and intervention. However, coefficients and effect sizes in a causal mediation analysis are nearly always small, which can lead those less familiar with the approach to reject its results. The current paper highlights five factors that contribute to small path coefficients in mediation research: loss of information when measuring relationships across time, controlling for prior levels of a predicted variable, adding control variables to the analysis, ignoring measurement error in one's variables, and using multiple mediators. It is argued that these issues are best handled by increasing the statistical power of the analysis, identifying the optimal temporal interval between variables, using bootstrapped confidence intervals to analyze the results, and finding alternative ways of assessing the meaningfulness of the indirect effect.
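As a minimal illustration of the bootstrapped confidence interval recommended above, the following sketch estimates the indirect effect a·b in a simple X → M → Y mediation model with ordinary least squares and reports percentile limits; the simulated data and path values are invented for the example.

# Minimal sketch: percentile bootstrap CI for an indirect effect a*b.
import numpy as np

rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=n)
m = 0.3 * x + rng.normal(size=n)               # path a = 0.3
y = 0.25 * m + 0.1 * x + rng.normal(size=n)    # path b = 0.25, direct effect 0.1


def indirect_effect(x, m, y):
    """OLS estimate of a*b: a from M ~ X, b from Y ~ X + M."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a * b


boot = np.empty(5000)
idx = np.arange(n)
for r in range(boot.size):
    s = rng.choice(idx, size=n, replace=True)   # resample cases with replacement
    boot[r] = indirect_effect(x[s], m[s], y[s])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")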
23.
Process capability indices (PCIs) are tools widely used by industry to assess the quality of products and the performance of manufacturing processes. Classic versions of these indices were constructed for processes whose quality characteristics follow a normal distribution. In practice, many such characteristics do not, and the classic PCIs must then be modified to account for the non-normality; ignoring it can lead to misinterpretation of the process capability and to ill-advised business decisions. An asymmetric non-normal model that is receiving considerable attention because of its good properties is the Birnbaum–Saunders (BS) distribution. We propose, develop, implement and apply a methodology based on PCIs for BS processes that draws on estimation, parametric inference, bootstrap and optimization tools, implemented in the statistical software R. A simulation study is conducted to evaluate its performance, and real-world case studies on three data sets illustrate its potential. One of these data sets has already been published and comes from the electronics industry, whereas the other two are unpublished and come from the food industry.
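The sketch below is not the authors' methodology; it only shows how a percentile-based (Clements-type) capability index can be computed from a fitted Birnbaum–Saunders distribution, which SciPy exposes as scipy.stats.fatiguelife. The specification limits and the simulated data are assumptions for the example.

# Hedged sketch: percentile-based capability indices from a Birnbaum-Saunders fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = stats.fatiguelife.rvs(c=0.4, scale=1.0, size=400, random_state=rng)

LSL, USL = 0.3, 2.5                       # hypothetical specification limits

# Fit the BS distribution (location fixed at zero, as in the standard model).
c_hat, loc_hat, scale_hat = stats.fatiguelife.fit(data, floc=0)

# 0.135th, 50th and 99.865th percentiles of the fitted distribution.
q_low, q_med, q_high = stats.fatiguelife.ppf(
    [0.00135, 0.5, 0.99865], c_hat, loc=loc_hat, scale=scale_hat
)

C_Np = (USL - LSL) / (q_high - q_low)
C_Npk = min((USL - q_med) / (q_high - q_med), (q_med - LSL) / (q_med - q_low))
print(f"C_Np = {C_Np:.3f}, C_Npk = {C_Npk:.3f}")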
24.
This article investigates the impact of multivariate generalized autoregressive conditional heteroskedastic (GARCH) errors on hypothesis testing for cointegrating vectors. The study reviews a cointegrated vector autoregressive model with multivariate GARCH innovations and a regularity condition required for valid asymptotic inference. Monte Carlo experiments are then conducted on a test statistic for a hypothesis on the cointegrating vectors. The experiments demonstrate that the regularity condition plays a critical role in making the hypothesis test operational. It is also shown that a Bartlett-type correction and the wild bootstrap are useful for improving the small-sample size and power performance of the test statistic of interest.
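The following generic sketch illustrates only the wild (Rademacher) bootstrap resampling idea mentioned above, here applied to a heteroskedasticity-robust t-test in a simple regression; it is not the cointegration test studied in the article, and all data are simulated.

# Generic wild (Rademacher) bootstrap critical values for a robust t-test.
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
e = rng.normal(size=n) * (1.0 + 0.8 * np.abs(x))   # conditionally heteroskedastic errors
y = e                                              # H0: slope = 0 holds in the DGP


def robust_t(x, y):
    """HC0-robust t-statistic for the slope in y = b0 + b1*x + e."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = X.T @ (X * resid[:, None] ** 2)
    se = np.sqrt((bread @ meat @ bread)[1, 1])
    return beta[1] / se


t_obs = robust_t(x, y)

# Wild bootstrap under the null: keep the regressor fixed and flip the signs
# of the null-restricted (intercept-only) residuals with Rademacher multipliers.
resid0 = y - y.mean()
B = 999
t_boot = np.empty(B)
for b in range(B):
    v = rng.choice([-1.0, 1.0], size=n)
    y_star = y.mean() + resid0 * v
    t_boot[b] = robust_t(x, y_star)

crit = np.percentile(np.abs(t_boot), 95)
print(f"|t| = {abs(t_obs):.2f}, wild-bootstrap 5% critical value = {crit:.2f}")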
25.
Schulz, Terry W., and Susan Griffin. Risk Analysis, 1999, 19(4): 577–584.
The U.S. Environmental Protection Agency (EPA) recommends using the one-sided 95% upper confidence limit of the arithmetic mean, based on either a normal or a lognormal distribution, for the contaminant (or exposure point) concentration term in the Superfund risk assessment process. When the data are neither normal nor lognormal, this recommended approach may overestimate the exposure point concentration (EPC) and may lead to unnecessary cleanup at a hazardous waste site. The EPA concentration term only performs comparably to alternative EPC methods when the data are well fit by a lognormal distribution. Several alternative methods for calculating the EPC are investigated and compared using soil data collected from three hazardous waste sites in Montana, Utah, and Colorado. For data sets that are well fit by a lognormal distribution, the Chebyshev inequality or the EPA concentration term may provide appropriate EPCs. For data sets in which the soil concentration data are well fit by gamma distributions, Wong's method may be used for calculating EPCs. The studentized bootstrap-t and Hall's bootstrap-t transformation are recommended for EPC calculation when all distribution fits are poor, while the parametric bootstrap may provide a suitable EPC when a data set is well fit by a distribution.
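To make two of these alternatives concrete, the sketch below computes the one-sided 95% UCL based on the Chebyshev inequality, x̄ + sqrt(1/α − 1)·s/√n, and the studentized bootstrap-t UCL for simulated right-skewed concentration data; the data and sample size are hypothetical, not from the three sites studied.

# Illustrative sketch of two EPC estimators for right-skewed concentration data.
import numpy as np

rng = np.random.default_rng(4)
conc = rng.lognormal(mean=1.0, sigma=1.2, size=60)   # hypothetical soil concentrations

n = conc.size
xbar, se = conc.mean(), conc.std(ddof=1) / np.sqrt(n)
alpha = 0.05

# Chebyshev one-sided 95% UCL: xbar + sqrt(1/alpha - 1) * se
ucl_cheb = xbar + np.sqrt(1.0 / alpha - 1.0) * se

# Studentized bootstrap-t 95% UCL: xbar - q_{0.05}(t*) * se,
# where t* = (mean* - xbar) / se* over bootstrap resamples.
B = 4000
t_star = np.empty(B)
for b in range(B):
    s = rng.choice(conc, size=n, replace=True)
    t_star[b] = (s.mean() - xbar) / (s.std(ddof=1) / np.sqrt(n))
ucl_boot_t = xbar - np.percentile(t_star, 100 * alpha) * se

print(f"mean = {xbar:.2f}, Chebyshev UCL = {ucl_cheb:.2f}, bootstrap-t UCL = {ucl_boot_t:.2f}")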
26.
Achieving a consistent growth pattern across batches of commercial yeast fermentation through the addition of water, molasses and other chemicals is complex because of the biochemical reactions involved. Regression models play an important role in describing the underlying mechanism, provided it is known. In contrast, artificial neural networks provide a wide class of general-purpose, flexible non-linear architectures capable of representing complex industrial processes. In this paper, an attempt is made to find a robust control system for a time-varying yeast fermentation process through statistical means and to compare it with non-parametric neural network techniques. The data used in this context come from an industry producing baker's yeast through a fed-batch fermentation process. Comparing the techniques on their accuracy in predicting the growth pattern of commercial yeast shows that the backpropagation neural network performs best, while the statistical model based on projection pursuit regression also achieves high prediction accuracy. The models thus developed would also help to find an optimum combination of parameters for minimizing the variability of yeast production.
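As a generic illustration (not the study's model or data), the sketch below fits a backpropagation neural network to synthetic fed-batch process variables with scikit-learn's MLPRegressor; the process variables, their ranges, and the underlying response are invented for the example.

# Minimal sketch: a backpropagation network predicting yeast growth from
# synthetic fed-batch process variables.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(5)
n = 800
# hypothetical inputs: molasses feed rate, water addition, temperature, pH
X = rng.uniform([0.5, 0.0, 28.0, 4.0], [2.5, 1.0, 34.0, 6.0], size=(n, 4))
growth = (1.2 * X[:, 0] - 0.8 * (X[:, 2] - 31.0) ** 2 / 9.0
          + 0.5 * np.sin(3.0 * X[:, 3]) + rng.normal(scale=0.1, size=n))

X_tr, X_te, y_tr, y_te = train_test_split(X, growth, test_size=0.25, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0),
)
model.fit(X_tr, y_tr)
print("test R^2:", round(r2_score(y_te, model.predict(X_te)), 3))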
27.
We consider estimation of the unknown parameters of the Chen distribution [Chen Z. A new two-parameter lifetime distribution with bathtub shape or increasing failure rate function. Statist Probab Lett. 2000;49:155–161] with bathtub-shaped hazard rate using progressively censored samples. We obtain maximum likelihood estimates by means of an expectation–maximization algorithm. Different Bayes estimates are derived under squared error and balanced squared error loss functions. Since the associated posterior distribution is intractable, an approximation method is used to compute these estimates; a Metropolis–Hastings (MH) algorithm is also proposed, from which further approximate Bayes estimates are obtained. Asymptotic confidence intervals are constructed using the observed Fisher information matrix, and bootstrap intervals are proposed as well. Samples generated from the MH algorithm are further used in the construction of HPD intervals. We also obtain prediction intervals and estimates for future observations in one- and two-sample situations. A numerical study compares the performance of the proposed methods using simulations, and real data sets are analysed for illustration.
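The sketch below is limited to a complete-sample maximum likelihood fit of the two Chen (2000) parameters, ignoring the progressive censoring and the Bayesian machinery treated in the paper; the data are simulated by inverting the Chen CDF F(x) = 1 − exp{λ(1 − exp(x^β))}.

# Hedged sketch: complete-sample MLE for the Chen distribution via Nelder-Mead.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
lam_true, beta_true = 1.0, 0.7

# Inverse-CDF sampling: x = [log(1 - log(1 - u)/lambda)]^(1/beta)
u = rng.uniform(size=300)
x = (np.log1p(-np.log1p(-u) / lam_true)) ** (1.0 / beta_true)


def neg_loglik(theta, x):
    lam, beta = theta
    if lam <= 0 or beta <= 0:
        return np.inf
    xb = x ** beta
    # log f(x) = log(lam) + log(beta) + (beta-1)*log(x) + x^beta + lam*(1 - exp(x^beta))
    ll = (np.log(lam) + np.log(beta) + (beta - 1.0) * np.log(x)
          + xb + lam * (1.0 - np.exp(xb)))
    return -ll.sum()


res = minimize(neg_loglik, x0=[0.5, 0.5], args=(x,), method="Nelder-Mead")
print("MLE (lambda, beta):", np.round(res.x, 3))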
28.
Ragin's Qualitative Comparative Analysis (QCA) is often used with small to medium samples where the researcher has good case knowledge. Employing it to analyse large survey datasets, without in-depth case knowledge, raises new challenges, and we present ways of addressing them. We first report a single QCA result from a configurational analysis of the British National Child Development Study dataset (highest educational qualification as a set-theoretic function of social class, sex and ability). We then address the robustness of our analysis by employing Dușa and Thiem's R QCA package to explore the consequences of (i) changing the fuzzy-set calibration of ability, (ii) simulating errors in the measurement of ability and (iii) changing the thresholds for assessing the quasi-sufficiency of causal configurations for educational achievement. We also consider how the analysis behaves under simulated re-sampling, using bootstrapping. The paper offers suggested methods to others wishing to use QCA with large-n data.
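As a rough, hypothetical illustration of the ingredients mentioned above (direct calibration of a continuous score into a fuzzy set, the consistency of a configuration as quasi-sufficient for an outcome, and bootstrap re-sampling of that consistency), the sketch below uses invented data and anchors rather than the NCDS variables or the R QCA package.

# Hypothetical sketch: fuzzy-set calibration, sufficiency consistency, and a
# bootstrap check of that consistency under re-sampling.
import numpy as np

rng = np.random.default_rng(7)
n = 2000
ability = rng.normal(50, 10, size=n)                           # raw test score
service_class = rng.binomial(1, 0.35, size=n).astype(float)    # crisp set membership
outcome_score = 0.04 * ability + 1.0 * service_class + rng.normal(size=n)


def calibrate(x, full_out, crossover, full_in):
    """Direct (logistic) calibration of a raw score into a fuzzy-set membership."""
    scale = np.log(0.95 / 0.05)
    z = np.where(x >= crossover,
                 scale * (x - crossover) / (full_in - crossover),
                 scale * (x - crossover) / (crossover - full_out))
    return 1.0 / (1.0 + np.exp(-z))


high_ability = calibrate(ability, 35, 50, 65)
high_qual = calibrate(outcome_score, 1.0, 2.5, 4.0)

# configuration: service class AND high ability (fuzzy intersection = min)
config = np.minimum(service_class, high_ability)


def consistency(x, y):
    """Consistency of X as sufficient for Y: sum(min(x, y)) / sum(x)."""
    return np.minimum(x, y).sum() / x.sum()


cons_obs = consistency(config, high_qual)

boot = np.empty(2000)
idx = np.arange(n)
for b in range(boot.size):
    s = rng.choice(idx, size=n, replace=True)
    boot[b] = consistency(config[s], high_qual[s])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"consistency = {cons_obs:.3f}, bootstrap 95% interval [{lo:.3f}, {hi:.3f}]")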
29.
In this paper we consider and propose confidence intervals for estimating the mean, or the difference of means, of skewed populations. We extend the median t interval to the two-sample problem and suggest using the bootstrap to find the critical points used in calculating median t intervals. A simulation study compares the performance of the intervals, and a real-life example illustrates the application of the methods.
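The following sketch illustrates only the general idea of replacing Student-t critical points with bootstrap ones in a two-sample t-type interval for the difference of means of skewed populations; it is not the authors' median-t construction, and the data are simulated.

# Generic sketch: two-sample interval with bootstrap critical points.
import numpy as np

rng = np.random.default_rng(8)
x = rng.lognormal(0.0, 1.0, size=40)
y = rng.lognormal(0.3, 1.0, size=50)


def diff_and_se(x, y):
    d = x.mean() - y.mean()
    se = np.sqrt(x.var(ddof=1) / x.size + y.var(ddof=1) / y.size)
    return d, se


d_obs, se_obs = diff_and_se(x, y)

# Bootstrap the studentised difference to obtain critical points.
B = 4000
t_star = np.empty(B)
for b in range(B):
    xs = rng.choice(x, size=x.size, replace=True)
    ys = rng.choice(y, size=y.size, replace=True)
    d_b, se_b = diff_and_se(xs, ys)
    t_star[b] = (d_b - d_obs) / se_b

q_lo, q_hi = np.percentile(t_star, [2.5, 97.5])
ci = (d_obs - q_hi * se_obs, d_obs - q_lo * se_obs)
print(f"difference = {d_obs:.3f}, 95% bootstrap-t CI = ({ci[0]:.3f}, {ci[1]:.3f})")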
30.
Various methods exist for calculating confidence intervals for the benchmark dose in risk analysis. This study compares the performance of three such methods when fitting nonlinear dose-response models: the delta method, the likelihood-ratio method, and the bootstrap method. A data set from a developmental toxicity test with continuous, ordinal, and quantal dose-response data is used for the comparison. Nonlinear dose-response models with various shapes were fitted to these data. The results indicate that a few thousand runs are generally needed to obtain stable confidence limits with the bootstrap method. The bootstrap and the likelihood-ratio method gave fairly similar results, whereas the delta method in some cases produced different (usually narrower) intervals and appears unreliable for nonlinear dose-response models. Since the bootstrap method is more time-consuming than the likelihood-ratio method, the latter is more attractive for routine dose-response analysis. In the context of a probabilistic risk assessment, however, the bootstrap method has the advantage of linking directly to Monte Carlo analysis.
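As an illustration of the bootstrap approach (not the study's models or data), the sketch below fits a log-logistic model to hypothetical quantal dose-response data by maximum likelihood, computes the benchmark dose at 10% extra risk, and obtains a lower confidence limit from parametric-bootstrap refits.

# Hedged sketch: parametric bootstrap confidence limit for a benchmark dose.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, logit

dose = np.array([0.0, 10.0, 30.0, 100.0, 300.0])
n_animals = np.array([50, 50, 50, 50, 50])
affected = np.array([2, 4, 8, 20, 40])
BMR = 0.10                                        # benchmark response (extra risk)


def prob(params, dose):
    """Log-logistic model with background: p = bg + (1 - bg) * expit(a + b*log(d))."""
    g, a, b = params
    bg = expit(g)
    d = np.where(dose > 0, dose, 1.0)
    frac = np.where(dose > 0, expit(a + b * np.log(d)), 0.0)
    return bg + (1.0 - bg) * frac


def neg_loglik(params, dose, n, k):
    p = np.clip(prob(params, dose), 1e-9, 1 - 1e-9)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))


def fit_params(n, k):
    res = minimize(neg_loglik, x0=[-3.0, -5.0, 1.0], args=(dose, n, k),
                   method="Nelder-Mead")
    return res.x


def bmd_from(params):
    # extra risk equals BMR when a + b*log(d) = logit(BMR)
    _, a, b = params
    return np.exp((logit(BMR) - a) / b)


theta_hat = fit_params(n_animals, affected)
bmd_hat = bmd_from(theta_hat)
p_hat = prob(theta_hat, dose)

# Parametric bootstrap: resample binomial counts at the fitted probabilities and refit.
rng = np.random.default_rng(9)
bmd_boot = np.array([bmd_from(fit_params(n_animals, rng.binomial(n_animals, p_hat)))
                     for _ in range(2000)])
bmdl = np.percentile(bmd_boot, 5)                 # one-sided 95% lower limit
print(f"BMD = {bmd_hat:.1f}, bootstrap BMDL (5th percentile) = {bmdl:.1f}")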