1.
The close relationship between quality and maintenance of manufacturing systems has contributed to the development of integrated models that combine statistical process control (SPC) and maintenance. This article demonstrates the integration of the Shewhart individual-residual (Z_X–Z_e) joint control chart and maintenance for two-stage dependent processes by jointly optimizing their policies to minimize the expected total costs associated with quality, maintenance and inspection. To evaluate the effectiveness of the proposed model, two stand-alone models, a maintenance model and an SPC model, are developed for comparison. A numerical example then illustrates the application of the proposed integrated model. The results show that the integrated model outperforms the two stand-alone models with regard to the expected cost per unit time. Finally, a sensitivity analysis is conducted to develop insights into the time and cost parameters that influence the integration effort.
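Below is a minimal Python sketch of the monitoring idea behind an individual-residual (Z_X, Z_e) chart pair for a two-stage dependent (cascade) process: the first-stage measurement is charted directly and the second-stage measurement is charted through the residuals of a regression on the first stage. The simulated data, the fitted linear cascade model and the 3-sigma limits are illustrative assumptions; the paper's integrated maintenance and cost optimization is not reproduced.

```python
# Minimal sketch: joint monitoring of a two-stage dependent (cascade) process.
# Stage-1 quality X is charted directly; stage-2 quality Y is charted through
# the residuals of a regression of Y on X, so stage-2 signals are not
# confounded by stage-1 variation. Data, limits and the fitted model are
# illustrative assumptions, not the paper's integrated cost model.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = 10 + rng.normal(0, 1, n)               # stage-1 measurements
y = 5 + 0.8 * x + rng.normal(0, 0.5, n)    # stage-2 measurements depend on stage 1

# Phase I: fit the cascade relation and standardize
beta1, beta0 = np.polyfit(x, y, 1)
resid = y - (beta0 + beta1 * x)

z_x = (x - x.mean()) / x.std(ddof=1)               # individuals chart for stage 1
z_e = (resid - resid.mean()) / resid.std(ddof=1)   # residual chart for stage 2

# Phase II: flag points outside +/- 3 sigma on either chart
out_of_control = (np.abs(z_x) > 3) | (np.abs(z_e) > 3)
print("signals at samples:", np.flatnonzero(out_of_control))
```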
2.
Measuring and improving the efficiency of the Chinese commercial banking system has recently attracted increasing interest. Few studies, however, have adopted the two-stage network DEA to explore this issue in the Chinese context. Because the entire operational process of the banking system can be divided into two sub-processes (deposit producing and profit earning), evaluating the sub-process efficiencies helps identify the sources of the inefficiency of the entire banking system. In this study, we use the network DEA approach to disaggregate, evaluate and test the efficiencies of 16 major Chinese commercial banks during the third round of the Chinese banking reform period (2003–2011), under a variable-returns-to-scale setting and with consideration of undesirable (bad) output. The main findings of this study are as follows: (i) the two-stage DEA model is more effective than the conventional black box DEA model in identifying the inefficiency of the banking system, and the inefficiency of the Chinese banking system primarily results from the inefficiency of its deposit-producing sub-process; (ii) the overall efficiency of the Chinese banking system improves over the study period because of the reform; (iii) the state-owned commercial banks (SOBs) appear to be more efficient overall than the joint-stock commercial banks (JSBs) only in the pre-reform period, and the efficiency difference between the SOBs and the JSBs narrows over the post-reform period; (iv) the disposal of non-performing loans (NPLs) from the Chinese banking system in general explains its efficiency improvement, and the joint-equity reform of the SOBs specifically increases their efficiencies.
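As a rough illustration of scoring the two sub-processes separately, the sketch below solves a standard input-oriented, constant-returns DEA linear program for each stage with scipy; the variable-returns setting, the undesirable output and the network linkage used in the study are not modelled, and the bank data are invented.

```python
# Sketch: score each sub-process of a deposit-producing / profit-earning chain
# with a plain input-oriented CRS DEA LP (scipy). This ignores the paper's
# variable-returns setting, the undesirable output, and the linkage between
# stages; banks and figures are made-up illustrations.
import numpy as np
from scipy.optimize import linprog

def dea_crs_input(X, Y, j0):
    """Efficiency of DMU j0: min theta s.t. X@lam <= theta*x0, Y@lam >= y0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]        # decision vars: [theta, lam_1..lam_n]
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, j0]            # X@lam - theta*x0 <= 0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                  # -Y@lam <= -y0
    b_ub[m:] = -Y[:, j0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun

# columns = banks; stage 1: (labor, fixed assets) -> deposits
X1 = np.array([[3.0, 2.5, 4.0, 3.5],
               [1.2, 1.0, 2.0, 1.4]])
Z  = np.array([[50.0, 45.0, 60.0, 52.0]])    # deposits (intermediate measure)
# stage 2: deposits -> (loans, securities income)
Y2 = np.array([[40.0, 38.0, 42.0, 44.0],
               [ 6.0,  5.5,  4.0,  7.0]])

for j in range(X1.shape[1]):
    e1 = dea_crs_input(X1, Z, j)       # deposit-producing efficiency
    e2 = dea_crs_input(Z, Y2, j)       # profit-earning efficiency
    # the product is only a simple multiplicative summary of the two stages
    print(f"bank {j}: stage1={e1:.3f}, stage2={e2:.3f}, overall={e1*e2:.3f}")
```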
3.
Tactical production-distribution planning models have attracted a great deal of attention in the past decades. In these models, production and distribution decisions are considered simultaneously, so that the combined plans are more advantageous than plans resolved in a hierarchical planning process. We consider a two-stage production process where, in the first stage, raw materials are transformed into continuous resources that feed the discrete production of end products in the second stage. Moreover, the setup times and costs of resources depend on the sequence in which they are processed in the first stage. The minimum scheduling unit is the product family, which consists of products sharing common resources and manufacturing processes. Based on different mathematical modelling approaches to production in the first stage, we develop a sequence-oriented formulation and a product-oriented formulation, and propose decomposition-based heuristics to solve this problem efficiently. By considering these dependencies arising in practical production processes, our model can be applied to various industrial cases, such as the beverage or steel industries. Computational tests on instances from an industrial application are provided at the end of the paper.
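A heavily simplified sketch of the lot-sizing core of such models is given below, using PuLP: product families with setup and holding costs under a shared per-period capacity. The two-stage resource coupling and the sequence-dependent setups described above are omitted, and all names and numbers are illustrative assumptions.

```python
# Much-simplified tactical lot-sizing sketch with PuLP: families, per-period
# capacity, setup and holding costs. The paper's two-stage resource coupling
# and sequence-dependent setups are omitted; all data are illustrative.
import pulp

families, periods = ["A", "B"], range(4)
demand = {("A", 0): 20, ("A", 1): 0,  ("A", 2): 30, ("A", 3): 10,
          ("B", 0): 10, ("B", 1): 25, ("B", 2): 0,  ("B", 3): 15}
capacity = 60
setup_cost = {"A": 100.0, "B": 80.0}
hold_cost = {"A": 1.0, "B": 1.5}

prob = pulp.LpProblem("family_lot_sizing", pulp.LpMinimize)
x = pulp.LpVariable.dicts("prod", (families, periods), lowBound=0)
inv = pulp.LpVariable.dicts("inv", (families, periods), lowBound=0)
y = pulp.LpVariable.dicts("setup", (families, periods), cat="Binary")

prob += pulp.lpSum(setup_cost[f] * y[f][t] + hold_cost[f] * inv[f][t]
                   for f in families for t in periods)

for f in families:
    for t in periods:
        prev_inv = inv[f][t - 1] if t > 0 else 0
        prob += prev_inv + x[f][t] == demand[(f, t)] + inv[f][t]  # flow balance
        prob += x[f][t] <= capacity * y[f][t]                     # setup linking
for t in periods:
    prob += pulp.lpSum(x[f][t] for f in families) <= capacity     # capacity

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("total cost:", pulp.value(prob.objective))
```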
4.
A. Galbete & J. A. Moler, Statistics, 2016, 50(2), 418–434
In a randomized clinical trial, response-adaptive randomization procedures use the information gathered, including the previous patients' responses, to allocate the next patient. In this setting, we consider randomization-based inference. We provide an algorithm to obtain exact p-values for statistical tests that compare two treatments with dichotomous responses. This algorithm can be applied to a family of response-adaptive randomization procedures that share the following property: the distribution of the allocation rule depends only on the imbalance between treatments and on the imbalance between successes for treatments 1 and 2 in the previous step. This family includes several prominent response-adaptive randomization procedures. We study a randomization test for the null hypothesis of equivalence of treatments and show that this test performs similarly to its parametric counterpart. In addition, we study the effect of a covariate on the inferential process. First, we obtain a parametric test, constructed under a logit model relating responses to treatments and covariate levels, and give conditions that guarantee its asymptotic normality. Finally, we show that the randomization test, which is free of model specification, performs as well as the parametric test that takes the covariate into account.
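The sketch below approximates such a randomization p-value by Monte Carlo rather than exactly: responses are held fixed under the strong null and the allocation sequence is re-generated from a randomized play-the-winner urn, one member of the family described above. The urn rule, the test statistic and the data are illustrative assumptions, not the paper's algorithm.

```python
# Monte Carlo approximation of a randomization p-value for a response-adaptive
# trial under the strong null (each patient's response is the same whichever
# treatment is given). A randomized play-the-winner urn stands in for the
# allocation rule; the paper derives exact p-values, this sketch only
# approximates them by re-simulating the randomization. Data are made up.
import numpy as np

rng = np.random.default_rng(1)

def rpw_allocations(responses, rng):
    """Re-run the urn: return 0/1 treatment labels for the fixed responses."""
    balls = np.array([1.0, 1.0])            # one ball per treatment to start
    alloc = np.empty(len(responses), dtype=int)
    for i, r in enumerate(responses):
        t = int(rng.random() < balls[1] / balls.sum())
        alloc[i] = t
        balls[t if r == 1 else 1 - t] += 1   # success reinforces t, failure the other
    return alloc

def success_diff(alloc, responses):
    p1 = responses[alloc == 1].mean() if (alloc == 1).any() else 0.0
    p0 = responses[alloc == 0].mean() if (alloc == 0).any() else 0.0
    return p1 - p0

# observed trial (fixed responses, observed allocation)
responses = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0])
alloc_obs = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0])
t_obs = success_diff(alloc_obs, responses)

reps = 10000
t_star = np.array([success_diff(rpw_allocations(responses, rng), responses)
                   for _ in range(reps)])
p_value = np.mean(np.abs(t_star) >= abs(t_obs))
print(f"observed diff = {t_obs:.3f}, randomization p-value ~ {p_value:.3f}")
```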
5.
In this paper, Abdelfatah and Mazloum's (2015) two-stage randomized response model is extended to unequal probability sampling and stratified unequal probability sampling, both with and without replacement. The extended models result in more efficient estimators than Lee et al.'s (2014) estimators of the proportion of the population having a sensitive attribute.
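For readers unfamiliar with randomized response, the sketch below shows only the classical Warner (1965) estimator under simple random sampling, the mechanism that two-stage and unequal-probability extensions build on; it is not the estimator proposed in this paper or in Lee et al. (2014), and the design probability and data are invented.

```python
# Classical Warner (1965) randomized-response estimator under simple random
# sampling, shown only to illustrate the mechanism that two-stage and
# unequal-probability extensions build on; this is NOT the abstract's
# estimator. p is the known probability of being asked the sensitive
# statement; all numbers are made up.
import numpy as np

rng = np.random.default_rng(2)
pi_true, p, n = 0.30, 0.70, 2000

# each respondent answers the sensitive statement with prob p, its complement otherwise
sensitive = rng.random(n) < pi_true
asked_sensitive = rng.random(n) < p
yes = np.where(asked_sensitive, sensitive, ~sensitive)

lam_hat = yes.mean()                          # observed proportion of "yes"
pi_hat = (lam_hat - (1 - p)) / (2 * p - 1)    # Warner estimator
var_hat = lam_hat * (1 - lam_hat) / ((2 * p - 1) ** 2 * n)
print(f"pi_hat = {pi_hat:.3f} (true {pi_true}), se ~ {np.sqrt(var_hat):.3f}")
```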
6.
Modelling udder infection data using copula models for quadruples
We study copula models for correlated infection times in the four udder quarters of dairy cows. Both a semi-parametric and a nonparametric approach are considered to estimate the marginal survival functions, taking into account the effect of a binary udder-quarter-level covariate. We use a two-stage estimation approach and briefly discuss the asymptotic behaviour of the estimators obtained in the first and second stages of the estimation. A pseudo-likelihood ratio test is used to select an appropriate copula from the power variance copula family to describe the association between the outcomes in a cluster. We propose a new bootstrap algorithm to obtain the p-value for this test; the algorithm also provides estimates of the standard errors of the estimated copula parameters. The proposed methods are applied to the udder infection data. A small simulation study in a setting similar to that of the udder infection data gives evidence that the proposed method provides a valid approach to select an appropriate copula within the power variance copula family.
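A miniature version of the two-stage idea is sketched below: marginals are estimated nonparametrically via ranks, and a copula parameter is then estimated by maximizing the pseudo-likelihood. A bivariate Clayton copula on uncensored simulated data stands in for the paper's four-dimensional power variance copula with censored infection times, so this is only an illustration of the estimation strategy.

```python
# Two-stage copula estimation in miniature: (1) estimate the marginals
# nonparametrically via ranks, (2) maximize the copula pseudo-likelihood.
# A bivariate Clayton copula on uncensored data stands in for the paper's
# four-dimensional power variance copula with censored infection times;
# all data are simulated for illustration.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

rng = np.random.default_rng(3)
n, theta_true = 500, 2.0

# simulate Clayton-dependent pairs via the conditional-distribution method
u = rng.random(n)
w = rng.random(n)
v = ((w ** (-theta_true / (1 + theta_true)) - 1) * u ** (-theta_true) + 1) ** (-1 / theta_true)
x, y = -np.log(1 - u), -np.log(1 - v)        # exponential marginals

# stage 1: pseudo-observations from the empirical marginals
uu = rankdata(x) / (n + 1)
vv = rankdata(y) / (n + 1)

# stage 2: maximize the Clayton pseudo-log-likelihood in theta
def neg_loglik(theta):
    s = uu ** (-theta) + vv ** (-theta) - 1
    ll = (np.log1p(theta) - (theta + 1) * (np.log(uu) + np.log(vv))
          - (2 + 1 / theta) * np.log(s))
    return -ll.sum()

res = minimize_scalar(neg_loglik, bounds=(1e-3, 20), method="bounded")
print(f"theta_hat = {res.x:.2f} (true {theta_true})")
```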
7.
Doubly adaptive biased coin design (DBCD) is an important family of response-adaptive randomization procedures for clinical trials. It uses sequentially updated estimation to skew the allocation probability to favor the treatment that has performed better thus far. An important assumption underlying the DBCD is homogeneity of the patient responses. However, this assumption may be violated in many sequential experiments. Here we prove the robustness of the DBCD against certain time trends in patient responses. Strong consistency and asymptotic normality of the design are obtained under some widely satisfied conditions. We also propose a general weighted-likelihood method to reduce the bias that such heterogeneity causes in inference after a trial. Some numerical studies are presented to illustrate the finite-sample properties of the DBCD.
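A simulation sketch of one common DBCD variant is shown below, assuming the Hu and Zhang allocation function with a square-root target for binary responses; the burn-in, the tuning parameter gamma and the response probabilities are illustrative choices, and the weighted-likelihood correction discussed above is not implemented.

```python
# Simulation sketch of one common doubly adaptive biased coin design (DBCD):
# the Hu-Zhang allocation function g(x, rho) steers the observed allocation
# proportion x toward an estimated target rho, here a square-root type target
# for binary responses. Parameters, burn-in and gamma are illustrative
# assumptions, not the paper's setting.
import numpy as np

rng = np.random.default_rng(4)
p = np.array([0.7, 0.5])        # true success probabilities of treatments 0 and 1
n, gamma, burn_in = 200, 2.0, 10

def g(x, rho, gamma):
    """Hu-Zhang allocation probability for treatment 0."""
    if x <= 0:
        return 1.0
    if x >= 1:
        return 0.0
    a = rho * (rho / x) ** gamma
    b = (1 - rho) * ((1 - rho) / (1 - x)) ** gamma
    return a / (a + b)

assign = np.empty(n, dtype=int)
success = np.empty(n, dtype=int)
for i in range(n):
    if i < burn_in:                            # equal randomization to start
        prob0 = 0.5
    else:
        n_k = np.array([np.sum(assign[:i] == k) for k in (0, 1)])
        s_k = np.array([np.sum(success[:i][assign[:i] == k]) for k in (0, 1)])
        p_hat = (s_k + 0.5) / (n_k + 1.0)      # shrunk success estimates
        rho = np.sqrt(p_hat[0]) / (np.sqrt(p_hat[0]) + np.sqrt(p_hat[1]))
        prob0 = g(n_k[0] / i, rho, gamma)
    assign[i] = int(rng.random() >= prob0)     # treatment 0 with prob prob0
    success[i] = int(rng.random() < p[assign[i]])

print("proportion allocated to treatment 0:", np.mean(assign == 0))
```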
8.
Various statistical tests have been developed for testing the equality of means in matched pairs with missing values. However, most existing methods are commonly based on certain distributional assumptions such as normality, 0-symmetry or homoscedasticity of the data. The aim of this paper is to develop a statistical test that is robust against deviations from such assumptions and also leads to valid inference in case of heteroscedasticity or skewed distributions. This is achieved by applying a clever randomization approach to handle missing data. The resulting test procedure is not only shown to be asymptotically correct but is also finitely exact if the distribution of the data is invariant with respect to the considered randomization group. Its small sample performance is further studied in an extensive simulation study and compared to existing methods. Finally, an illustrative data example is analysed.
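The sketch below illustrates only the complete-pair component of such a randomization approach: a sign-flip randomization test on the observed paired differences, with missing values simply set aside. The paper's procedure additionally folds the incomplete observations into the randomization; the data here are invented.

```python
# Sign-flip randomization test on the complete pairs of a matched design with
# missing values. The paper's procedure additionally incorporates the
# incomplete observations into the randomization; this sketch covers only the
# complete-pair component, with made-up data (NaN marks a missing value).
import numpy as np

rng = np.random.default_rng(5)
x = np.array([4.1, 5.0, np.nan, 3.8,    6.2, 5.5, np.nan, 4.9, 5.8,    4.4])
y = np.array([3.5, 4.2, 4.0,    np.nan, 5.1, 5.0, 4.7,    4.1, np.nan, 4.0])

complete = ~np.isnan(x) & ~np.isnan(y)
d = (x - y)[complete]                       # paired differences of complete pairs
t_obs = d.mean()

reps = 20000
signs = rng.choice([-1.0, 1.0], size=(reps, d.size))
t_star = (signs * d).mean(axis=1)           # statistic under random sign flips
p_value = np.mean(np.abs(t_star) >= abs(t_obs))
print(f"mean difference = {t_obs:.3f}, sign-flip p-value ~ {p_value:.4f}")
```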
9.
We compare posterior and predictive estimators and probabilities in response-adaptive randomization designs for two- and three-group clinical trials with binary outcomes. Adaptation based upon posterior estimates is discussed, as are two predictive probability algorithms: one using the traditional definition, the other using a skeptical distribution. Optimal and natural lead-in designs are covered. Simulation studies show that efficacy comparisons lead to more adaptation than center comparisons, though at some power loss; skeptically predictive efficacy comparisons and natural lead-in approaches lead to less adaptation but offer reduced allocation variability. Though nuanced, these results help clarify the power-adaptation trade-off in adaptive randomization.
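As a minimal illustration of posterior-based adaptation for two arms, the sketch below places independent Beta(1,1) priors on the success probabilities, estimates P(p1 > p2 | data) by Monte Carlo and tempers it into an allocation probability; the tempering exponent and interim counts are assumptions, and the predictive-probability and skeptical-prior variants compared in the paper are not implemented.

```python
# Posterior-probability-based adaptation for a two-arm binary trial:
# independent Beta(1,1) posteriors, Monte Carlo estimate of P(p1 > p2 | data),
# tempered by an exponent c to damp extreme allocations. This illustrates the
# posterior-based comparison the abstract discusses; the predictive-probability
# and skeptical-prior variants are not implemented here.
import numpy as np

rng = np.random.default_rng(6)

def alloc_prob_arm1(succ, fail, c=0.5, draws=20000):
    """Allocation probability for arm 1 given per-arm success/failure counts."""
    p1 = rng.beta(1 + succ[0], 1 + fail[0], draws)
    p2 = rng.beta(1 + succ[1], 1 + fail[1], draws)
    post = np.mean(p1 > p2)                        # P(arm 1 better | data)
    return post ** c / (post ** c + (1 - post) ** c)

# interim data: arm 1 has 12/20 successes, arm 2 has 7/20
print(f"next-patient probability of arm 1: "
      f"{alloc_prob_arm1(succ=(12, 7), fail=(8, 13)):.3f}")
```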
10.
A slacks-based inefficiency measure for a two-stage system with bad outputs
We model the performance of DMUs (decision-making units) using a two-stage network model. In the first stage of production DMUs use inputs to produce an intermediate output that becomes an input to a second stage where final outputs are produced. Previous black box DEA models allowed for non-radial scaling of outputs and inputs and accounted for slacks in the constraints that define the technology. We extend these models and build a performance measure that accounts for a network structure of production. We use our method to estimate the performance of Japanese banks, which use labor, physical capital, and financial equity capital in a first stage to produce an intermediate output of deposits. In the second stage, those deposits become an input in the production of loans and securities investments. The network estimates reveal greater bank inefficiency than do the estimates that treat the bank production process as a black box with all production taking place in a single stage.
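A single-stage ("black box") analogue of a slacks-based inefficiency measure is sketched below: a weighted additive LP that maximizes the average normalized input and output slacks under constant returns. The two-stage network structure and the treatment of intermediate deposits described above are not modelled, and the bank data are invented.

```python
# Weighted additive slacks-based inefficiency for a single ("black box") stage
# under constant returns: maximize the average normalized input and output
# slacks reachable within the technology. The paper's two-stage network
# structure and bad-output treatment are not modelled; data are made up.
import numpy as np
from scipy.optimize import linprog

def slack_inefficiency(X, Y, j0):
    m, n = X.shape
    s = Y.shape[0]
    # variables: [lam (n), s_x (m), s_y (s)]; maximize mean normalized slack
    c = -np.r_[np.zeros(n), 1 / ((m + s) * X[:, j0]), 1 / ((m + s) * Y[:, j0])]
    A_eq = np.zeros((m + s, n + m + s))
    A_eq[:m, :n] = X                       # X@lam + s_x = x0
    A_eq[:m, n:n + m] = np.eye(m)
    A_eq[m:, :n] = Y                       # Y@lam - s_y = y0
    A_eq[m:, n + m:] = -np.eye(s)
    b_eq = np.r_[X[:, j0], Y[:, j0]]
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return -res.fun                        # 0 means no feasible slack improvement

# columns = banks; inputs: (labor, equity), outputs: (loans, securities)
X = np.array([[5.0, 4.0, 6.0], [2.0, 1.5, 2.5]])
Y = np.array([[40.0, 35.0, 38.0], [8.0, 9.0, 6.0]])
for j in range(X.shape[1]):
    print(f"bank {j}: inefficiency = {slack_inefficiency(X, Y, j):.3f}")
```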