Similar Literature

20 similar documents found.
1.
Abstract

Indirect approaches based on minimal path vectors (d-MPs) and/or minimal cut vectors (d-MCs) are reported to be efficient for the reliability evaluation of multistate networks. Such techniques may nevertheless be cumbersome when the network and the component state spaces are relatively large, motivating the search for more efficient evaluation methods for exact reliability. Alternatively, computing reliability bounds can provide approximate reliability with less computational effort. Based on Bai's exact and indirect reliability evaluation algorithm, an improved algorithm is proposed in this study that provides sequences of upper and lower reliability bounds for multistate networks. Novel heuristic rules with a pre-specified value to filter less important sets of unspecified states are then developed and incorporated into the algorithm. Computational experiments comparing the proposed methods with an existing direct bounding algorithm show that the new algorithms can provide tight reliability bounds with less computational effort, especially the proposed algorithm with heuristic L1.
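To see what a d-MP-based evaluation computes, here is a minimal sketch for a toy two-component system (the state probabilities and d-MPs below are hypothetical, and exhaustive enumeration is used rather than Bai's algorithm):

```python
from itertools import product

def reliability_from_dmps(state_probs, d_mps):
    """Exact P(system meets demand d) for a tiny multistate system.

    state_probs[i][s] = P(component i is in state s); a component state
    vector x satisfies demand d iff x >= p componentwise for some d-MP p.
    Exhaustive enumeration -- only feasible for toy systems.
    """
    n = len(state_probs)
    total = 0.0
    for x in product(*(range(len(p)) for p in state_probs)):
        if any(all(x[i] >= p[i] for i in range(n)) for p in d_mps):
            prob = 1.0
            for i in range(n):
                prob *= state_probs[i][x[i]]
            total += prob
    return total

# Hypothetical 2-component system with states {0, 1, 2} each.
probs = [[0.1, 0.3, 0.6], [0.2, 0.3, 0.5]]
d_mps = [(2, 1), (1, 2)]   # assumed d-MPs for some demand level d
print(round(reliability_from_dmps(probs, d_mps), 4))
```

Bounding algorithms avoid this exhaustive sum by accumulating the probability of state vectors covered so far (lower bound) and not yet ruled out (upper bound).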

2.
Abstract

Librarians are charged with providing effective library instruction (i.e., how to use the library, how to conduct research, etc.) while also effectively teaching library users information literacy skills (how to interpret and evaluate the information they are accessing). Challenges that librarians face when trying to accomplish these tasks include time constraints in the classroom, students who are not tech-savvy, students who do not understand library terminology, and students who do not understand information literacy as a whole. The purpose of this article is to briefly describe scenarios that librarians may encounter when teaching both library instruction and information literacy: scenarios that could hinder the learning process for students and library users. Tips and suggestions are provided that can help librarians assess the diverse makeup of class attendees and tailor their instruction sessions. Tips and suggestions are also provided to help librarians better engage students and users during library instruction and information literacy instruction.

3.
In this paper we consider the problem of estimating the locations of several normal populations when an order relation between them is known to be true. We compare the maximum likelihood estimator, the M-estimators based on Huber's ψ function, a robust weighted likelihood estimator, the Gastwirth estimator, and the trimmed mean estimator. A Monte Carlo study illustrates the performance of the methods considered.

4.
Pretest–posttest studies are an important and popular method for assessing the effectiveness of a treatment or an intervention in many scientific fields. While the treatment effect, measured as the difference between the two mean responses, is of primary interest, testing the difference of the two distribution functions for the treatment and the control groups is also an important problem. The Mann–Whitney test has been a standard tool for testing the difference of distribution functions with two independent samples. We develop empirical likelihood-based (EL) methods for the Mann–Whitney test to incorporate the two unique features of pretest–posttest studies: (i) the availability of baseline information for both groups; and (ii) the structure of the data with missingness by design. Our proposed methods combine the standard Mann–Whitney test with the EL method of Huang, Qin and Follmann [(2008), 'Empirical Likelihood-Based Estimation of the Treatment Effect in a Pretest–Posttest Study', Journal of the American Statistical Association, 103(483), 1270–1280], the imputation-based empirical likelihood method of Chen, Wu and Thompson [(2015), 'An Imputation-Based Empirical Likelihood Approach to Pretest–Posttest Studies', The Canadian Journal of Statistics, accepted for publication], and the jackknife empirical likelihood method of Jing, Yuan and Zhou [(2009), 'Jackknife Empirical Likelihood', Journal of the American Statistical Association, 104, 1224–1232]. Theoretical results are presented, and the finite-sample performance of the proposed methods is evaluated through simulation studies.
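For reference, the standard two-sample Mann–Whitney statistic that the proposed EL methods build on simply counts concordant pairs; a minimal sketch:

```python
def mann_whitney_u(x, y):
    """Classical Mann-Whitney U statistic: the number of pairs (xi, yj)
    with xi > yj, counting ties as 1/2."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Two tiny illustrative samples.
x = [3.1, 4.2, 5.0]
y = [2.9, 4.2]
print(mann_whitney_u(x, y))  # -> 4.5
```

The EL extensions reweight this count using baseline covariates rather than treating all pairs equally.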

5.
Abstract

In this article, customers' strategic behavior and social optimization in a constant retrial queue with setup time and the N-policy are investigated. Customers who find that the server is not idle either leave forever or enter an orbit. After a service, the server seeks a customer from the orbit at a constant rate. The server is deactivated whenever the system becomes empty, and is activated when the number of waitlisted customers reaches a threshold. We obtain the equilibrium arrival rates in the different states. Both Follow-the-Crowd (FTC) and Avoid-the-Crowd (ATC) behaviors exist. Through the Particle Swarm Optimization (PSO) algorithm, we numerically obtain the optimal solution of the social welfare maximization problem. Finally, numerical examples are presented to illustrate the sensitivity of the system performance measures.
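PSO itself is a generic stochastic optimizer. A minimal one-dimensional sketch with standard inertia and acceleration weights follows; the objective below is a hypothetical stand-in for a social welfare function, not the paper's model:

```python
import random

def pso(f, bounds, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimization minimizing a 1-D objective f.
    Standard inertia/cognitive/social weights; a sketch, not the paper's setup."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                # each particle's best-seen position
    gbest = min(xs, key=f)       # swarm-wide best position
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (w * vs[i] + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)   # keep inside bounds
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

# Hypothetical concave welfare 10n - n^2, maximized by minimizing its negative.
best = pso(lambda n: -(10 * n - n ** 2), (0.0, 10.0))
print(round(best, 2))
```

With the seeded run above, the swarm converges near the analytic optimum n = 5.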

6.
A network cluster is defined as a set of nodes with ‘strong’ within group ties and ‘weak’ between group ties. Most clustering methods focus on finding groups of ‘densely connected’ nodes, where the dyad (or tie between two nodes) serves as the building block for forming clusters. However, since the unweighted dyad cannot distinguish strong relationships from weak ones, it then seems reasonable to consider an alternative building block, i.e. one involving more than two nodes. In the simplest case, one can consider the triad (or three nodes), where the fully connected triad represents the basic unit of transitivity in an undirected network. In this effort we propose a clustering framework for finding highly transitive subgraphs in an undirected/unweighted network, where the fully connected triad (or triangle configuration) is used as the building block for forming clusters. We apply our methodology to four real networks with encouraging results. Monte Carlo simulation results suggest that, on average, the proposed method yields good clustering performance on synthetic benchmark graphs, relative to other popular methods.
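The triangle building block is easy to enumerate directly. A minimal sketch on a small hypothetical graph, listing every fully connected triad:

```python
def triangles(edges):
    """Enumerate fully connected triads (triangles) in an undirected graph."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    tris = set()
    for u, v in edges:
        for w in adj[u] & adj[v]:   # a common neighbour closes the triad
            tris.add(tuple(sorted((u, v, w))))
    return sorted(tris)

edges = [(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (3, 5), (5, 6)]
print(triangles(edges))  # -> [(1, 2, 3), (3, 4, 5)]
```

A triangle-based clustering framework would then merge triads that share edges into larger highly transitive subgraphs.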

7.
Social network analysis is an important analytic tool for forecasting social trends by modeling and monitoring the interactions between network members. This paper proposes an extension of a statistical process control method to monitor social networks by determining the baseline periods from which the reference network set is collected. We consider a probability density profile (PDP) to identify baseline periods, using Poisson regression to model the communications between members. In addition, Hotelling's T2 and likelihood ratio test (LRT) statistics are developed to monitor the network in Phase I. The results, based on signal probability, indicate a satisfactory performance for the proposed method.
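A Hotelling T2 statistic of the kind used here measures the Mahalanobis distance of an observed summary vector from its in-control mean. A minimal two-dimensional sketch with hypothetical in-control parameters (not the paper's PDP-based Phase I procedure):

```python
def hotelling_t2(x, mean, cov):
    """Hotelling's T^2 for a 2-D observation: (x - m)' S^{-1} (x - m)."""
    d0, d1 = x[0] - mean[0], x[1] - mean[1]
    a, b, c, d = cov[0][0], cov[0][1], cov[1][0], cov[1][1]
    det = a * d - b * c
    # closed-form inverse of the 2x2 covariance matrix
    inv = [[d / det, -b / det], [-c / det, a / det]]
    return (d0 * (inv[0][0] * d0 + inv[0][1] * d1)
            + d1 * (inv[1][0] * d0 + inv[1][1] * d1))

# Hypothetical in-control mean/covariance of two communication-count summaries.
t2 = hotelling_t2([6.0, 3.0], [5.0, 2.0], [[2.0, 0.5], [0.5, 1.0]])
print(round(t2, 3))
```

An alarm would be raised when T2 exceeds a control limit derived from its in-control distribution.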

8.
ABSTRACT

In the stepwise procedure of selecting a fixed or a random explanatory variable in a mixed quantitative linear model with errors following a Gaussian stationary autocorrelated process, we studied the efficiency of five estimators relative to Generalized Least Squares (GLS): Ordinary Least Squares (OLS), Maximum Likelihood (ML), Restricted Maximum Likelihood (REML), First Differences (FD), and First-Difference Ratios (FDR). We also studied the validity and power of seven derived testing procedures for assessing the significance of the slope of the candidate explanatory variable x2 entering a model that already contains one regressor x1. In addition to five testing procedures from the literature, we considered the FDR t-test with n − 3 df and the modified t-test with n* − 3 df for partial correlations, where n* is Dutilleul's effective sample size. Efficiency, validity, and power were analyzed by Monte Carlo simulation, as functions of the nature, fixed vs. random (purely random or autocorrelated), of x1 and x2, the sample size, and the autocorrelation of the random terms in the regression model. We report extensive results for the autocorrelation structure of first-order autoregressive [AR(1)] type, and discuss results we obtained for other autocorrelation structures, such as the spherical semivariogram, first-order moving average [MA(1)], and ARMA(1,1), which space constraints prevented us from presenting. Overall, we found that:

  1. the efficiency of slope estimators and the validity of testing procedures depend primarily on the nature of x2, but not on that of x1;

  2. FDR is the most inefficient slope estimator, regardless of the nature of x1 and x2;

  3. REML is the most efficient of the slope estimators compared relative to GLS, provided the specified autocorrelation structure is correct and the sample size is large enough to ensure the convergence of its optimization algorithm;

  4. the FDR t-test, the modified t-test, and the REML t-test are the most valid of the testing procedures compared, despite the inefficiency of the FDR and OLS slope estimators underlying the former two;

  5. the FDR t-test, however, suffers from a lack of power that varies with the nature of x1 and x2; and

  6. the modified t-test for partial correlations, which does not require the specification of an autocorrelation structure, can be recommended when x1 is fixed or random and x2 is random, whether purely random or autocorrelated. Our results are illustrated by the environmental data that motivated our work.
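An effective sample size corrects degrees of freedom for autocorrelation. As a rough illustration (the widely used first-order approximation for the mean of an AR(1) process, not Dutilleul's matrix-based formula), n* = n(1 − ρ)/(1 + ρ):

```python
def ar1_effective_n(n, rho):
    """First-order approximation to the effective sample size of the mean
    of n AR(1)-correlated observations: n* = n (1 - rho) / (1 + rho).
    A simpler stand-in for Dutilleul's matrix-based effective sample size."""
    return n * (1 - rho) / (1 + rho)

# With autocorrelation 0.5, 100 observations carry about 33 independent ones.
print(round(ar1_effective_n(100, 0.5), 1))
```

Positive autocorrelation shrinks n*, which is why tests using the nominal n − 3 df tend to be liberal.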

9.
An affiliation network is a kind of two-mode social network with two distinct sets of nodes (namely, a set of actors and a set of social events) and edges representing the affiliation of the actors with the social events. Although a number of statistical models have been proposed to analyze affiliation networks, the asymptotic behavior of the estimators is still unknown or has not been properly explored. In this article, we study an affiliation model with the degree sequence as the exclusive natural sufficient statistic in the exponential family of distributions. We establish the uniform consistency and asymptotic normality of the maximum likelihood estimator when the numbers of actors and events both go to infinity. Simulation studies and a real data example demonstrate our theoretical results.

10.
ABSTRACT

The correlation coefficient (CC) is a standard measure of a possible linear association between two continuous random variables, and it plays a significant role in many scientific disciplines. For a bivariate normal distribution, there are many types of confidence intervals for the CC, such as z-transformation and maximum likelihood-based intervals. However, when the underlying bivariate distribution is unknown, the construction of confidence intervals for the CC is not well developed. In this paper, we discuss various interval estimation methods for the CC. We propose a generalized confidence interval for the CC when the underlying bivariate distribution is normal, and two empirical likelihood-based intervals for the CC when the underlying bivariate distribution is unknown. We also conduct extensive simulation studies to compare the new intervals with existing intervals in terms of coverage probability and interval length. Finally, two real examples are used to demonstrate the application of the proposed methods.
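As a baseline for the normal-theory case mentioned above, Fisher's z-transformation interval can be sketched as follows (the sample values r = 0.6, n = 50 are hypothetical):

```python
import math

def fisher_z_ci(r, n, z_crit=1.96):
    """Approximate 95% CI for a correlation via Fisher's z-transformation
    (valid under bivariate normality)."""
    z = math.atanh(r)                  # 0.5 * log((1 + r) / (1 - r))
    se = 1.0 / math.sqrt(n - 3)        # approximate standard error of z
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)   # back-transform to the r scale

lo, hi = fisher_z_ci(0.6, 50)
print(round(lo, 3), round(hi, 3))
```

Empirical likelihood intervals dispense with the normality assumption behind this standard error at the cost of a numerical profile computation.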

11.
We consider a social network from which one observes not only the network structure (i.e., nodes and edges) but also a set of labels (or tags, keywords) for each node (or user). These labels are self-created and closely related to the user's career status, lifestyle, personal interests, and much else; they are therefore of great interest for online marketing. To model their joint behavior with the network structure, a complete data model is developed. The model is based on the classical p1 model but allows the reciprocation parameter to be label-dependent. By focusing on connected pairs only, the complete data model can be generalized into a conditional model. Compared with the complete data model, the conditional model specifies only the conditional likelihood for the connected pairs. As a result, it suffers less risk from model misspecification. Furthermore, because the conditional model involves connected pairs only, the computational cost is much lower. The resulting estimator is consistent and asymptotically normal. Depending on the network sparsity level, the convergence rate can differ. To demonstrate its finite sample performance, numerical studies (based on both simulated and real datasets) are presented.

12.
In this paper we develop relatively easy methods for constructing hypercubic designs from symmetrical factorial experiments for t = v^m treatments with v = 2, 3. The proposed methods are easy to use and are flexible in terms of the choice of possible block sizes.
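The treatment set of a symmetrical v^m factorial is just the full Cartesian product of m factors at v levels each; a minimal sketch:

```python
from itertools import product

def factorial_treatments(v, m):
    """All t = v**m treatment combinations of a symmetrical v^m factorial,
    each treatment coded as an m-tuple of levels 0..v-1."""
    return list(product(range(v), repeat=m))

runs = factorial_treatments(3, 2)   # a 3^2 factorial: 9 treatments
print(len(runs), runs[:3])
```

A hypercubic design then partitions such a treatment set into blocks, which is where the construction methods of the paper come in.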

13.
The paper compares several methods for computing robust 1 − α confidence intervals for σ1^2 − σ2^2, or σ1^2/σ2^2, where σ1^2 and σ2^2 are the population variances corresponding to two independent treatment groups. The emphasis is on a Box–Scheffé approach when distributions have different shapes, and so the results reported here have implications for comparing means. The main result is that for unequal sample sizes, a Box–Scheffé approach can be considerably less robust than indicated by past investigations. Several other procedures for comparing variances, not based on a Box–Scheffé approach, were also examined and found to be highly unsatisfactory, although previously published papers found them to be robust when the distributions have identical shapes. Included is a new result on why the procedures examined here are not robust, and an illustration that increasing σ1^2 − σ2^2 can reduce power in certain situations. Constants needed to apply Dunnett's robust comparison of means are included.

14.
Preferential attachment in a directed scale-free graph is an often used paradigm for modeling the evolution of social networks. Social network data is usually given in a format allowing recovery of the number of nodes with in-degree i and out-degree j. Assuming a model with preferential attachment, formal statistical procedures for estimation can be based on such data summaries. Anticipating the statistical need for such node-based methods, we prove asymptotic normality of the node counts. Our approach is based on a martingale construction and a martingale central limit theorem.
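The paradigm can be illustrated with a toy simulation. The sketch below (hypothetical parameters; a simplification of the usual directed preferential-attachment model, not the authors' exact construction) grows a graph edge by edge, choosing sources and targets with probability proportional to shifted out- and in-degrees:

```python
import random

def directed_pa(n_edges, delta_in=1.0, delta_out=1.0, seed=0):
    """Toy directed preferential attachment: each new edge picks its target
    with probability proportional to (in-degree + delta_in) and, unless a
    fresh source node is created, its source proportional to
    (out-degree + delta_out)."""
    rng = random.Random(seed)
    indeg, outdeg = {0: 0, 1: 1}, {0: 1, 1: 0}   # start from the edge 0 -> 1
    edges = [(0, 1)]
    for _ in range(n_edges - 1):
        if rng.random() < 0.3:                    # new node enters as a source
            src = len(indeg)
            indeg[src] = 0
            outdeg[src] = 0
        else:                                     # preferential source choice
            src = rng.choices(list(outdeg),
                              [outdeg[v] + delta_out for v in outdeg])[0]
        tgt = rng.choices(list(indeg),
                          [indeg[v] + delta_in for v in indeg])[0]
        edges.append((src, tgt))
        outdeg[src] += 1
        indeg[tgt] += 1
    return edges, indeg, outdeg

edges, indeg, outdeg = directed_pa(200)
print(len(edges), sum(indeg.values()), sum(outdeg.values()))
```

From such a simulated graph one can tabulate the number of nodes with in-degree i and out-degree j, which is exactly the node-count summary whose asymptotic normality the paper establishes.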

15.
The allometric extension model is a multivariate regression model recently proposed by Tarpey and Ivey [(2006), 'Allometric Extension for Multivariate Regression', Journal of Data Science, 4, 479–495]. This model holds when the matrix of covariances between the variables in the response vector y and the variables in the vector of regressors x has a particular structure. In this paper, we consider tests of hypotheses for this structure when (y′, x′)′ has a multivariate normal distribution. In particular, we investigate the likelihood ratio test and a Wald test.

16.
Stochastic Models, 2013, 29(2–3): 695–724
Abstract

We consider two variants of a two-station tandem network with blocking. In both variants the first server ceases to work when the queue length at the second station hits a ‘blocking threshold.’ In addition, in variant 2 the first server decreases its service rate when the second queue exceeds a ‘slow-down threshold,’ which is smaller than the blocking level. In both variants the arrival process is Poisson and the service times at both stations are exponentially distributed. Note, however, that in the case of slow-downs, server 1 works at a high rate, a slow rate, or not at all, depending on whether the second queue is below the slow-down threshold, above it, or at the blocking threshold, respectively. For variant 1, i.e., only blocking, we concentrate on the geometric decay rate of the number of jobs in the first buffer and prove that for increasing blocking thresholds the sequence of decay rates decreases monotonically and at least geometrically fast to max{ρ1, ρ2}, where ρi is the load at server i. The methods used in the proof also allow us to clarify the asymptotic queue length distribution at the second station. Then we generalize the analysis to variant 2, i.e., slow-down and blocking, and establish analogous results.

17.
ABSTRACT

This short paper proves inequalities that restrict the magnitudes of the partial correlations in star-shaped structures in Gaussian graphical models. These inequalities have to be satisfied by distributions that are used for generating simulated data to test structure-learning algorithms, but methods that have been used to create such distributions do not always ensure that they are. The inequalities are also noteworthy because stars are common and meaningful in real-world networks.
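The defining constraint of a star structure is that the leaves are conditionally independent given the hub, so the leaf-leaf marginal correlation factors as the product of the two hub-leaf correlations and the corresponding partial correlation vanishes. A minimal sketch (the helper `partial_corr` and the correlation values are ours, for illustration only):

```python
def partial_corr(r_ij, r_ih, r_jh):
    """First-order partial correlation of variables i and j given h."""
    return (r_ij - r_ih * r_jh) / (((1 - r_ih ** 2) * (1 - r_jh ** 2)) ** 0.5)

# In a star, the leaf-leaf marginal correlation is the product of the
# hub-leaf correlations, so partialling out the hub removes it entirely.
rho_i, rho_j = 0.6, 0.7
print(partial_corr(rho_i * rho_j, rho_i, rho_j))
```

Simulated data generated without respecting such constraints would not actually follow a star-shaped Gaussian graphical model.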

18.
We obtain adjustments to the profile likelihood function in Weibull regression models with and without censoring. Specifically, we consider two different modified profile likelihoods: (i) the one proposed by Cox and Reid [Cox, D.R. and Reid, N., 1987, Parameter orthogonality and approximate conditional inference. Journal of the Royal Statistical Society B, 49, 1–39], and (ii) an approximation to the one proposed by Barndorff-Nielsen [Barndorff-Nielsen, O.E., 1983, On a formula for the distribution of the maximum likelihood estimator. Biometrika, 70, 343–365], the approximation having been obtained using the results of Fraser and Reid [Fraser, D.A.S. and Reid, N., 1995, Ancillaries and third-order significance. Utilitas Mathematica, 47, 33–53] and of Fraser et al. [Fraser, D.A.S., Reid, N. and Wu, J., 1999, A simple formula for tail probabilities for frequentist and Bayesian inference. Biometrika, 86, 655–661]. We focus on point estimation and likelihood ratio tests on the shape parameter in the class of Weibull regression models. We derive some distributional properties of the different maximum likelihood estimators and likelihood ratio tests. The numerical evidence presented in the paper favors the approximation to Barndorff-Nielsen's adjustment.

19.
ABSTRACT

The effect of parameter estimation on profile monitoring methods has been studied by only a few researchers, and only under the assumption of a normal response variable. However, in some practical situations the normality assumption is violated and the response variable follows a discrete distribution such as the Poisson. In this paper, we evaluate the effect of parameter estimation on the Phase II monitoring of Poisson regression profiles by considering two control charts, namely the Hotelling T2 and the multivariate exponentially weighted moving average (MEWMA) charts. Simulation studies in terms of the average run length (ARL) and the standard deviation of the run length (SDRL) are carried out to assess the effect of estimated parameters on the performance of the Phase II monitoring approaches. The results reveal that both the in-control and out-of-control performances of these charts are adversely affected when the regression parameters are estimated.

20.
In 2000, the first international standard in the field of monetary and financial statistics, the Monetary and Financial Statistics Manual (2000), was issued; eight years later, the International Monetary Fund published the more operationally oriented Monetary and Financial Statistics: Compilation Guide (2008). Through a comparison of the old and new versions, this paper reviews the changes in the overall framework and main content, and then, in light of China's actual circumstances, offers suggestions for strengthening research and making adequate preparations.
