  Subscription full text: 6650 articles
  Free access: 149 articles
  Domestic free access: 93 articles
Management: 817 articles
Ethnology: 33 articles
Talent studies: 2 articles
Demography: 81 articles
Collected works: 401 articles
Theory and methodology: 169 articles
General: 3732 articles
Sociology: 302 articles
Statistics: 1355 articles
  Articles by year: 2024: 7, 2023: 47, 2022: 55, 2021: 78, 2020: 121, 2019: 121, 2018: 145, 2017: 214,
  2016: 166, 2015: 203, 2014: 290, 2013: 785, 2012: 421, 2011: 368, 2010: 305, 2009: 305,
  2008: 342, 2007: 393, 2006: 406, 2005: 400, 2004: 331, 2003: 322, 2002: 228, 2001: 204,
  2000: 142, 1999: 91, 1998: 66, 1997: 51, 1996: 46, 1995: 47, 1994: 38, 1993: 29,
  1992: 29, 1991: 20, 1990: 18, 1989: 18, 1988: 8, 1987: 9, 1986: 3, 1985: 4,
  1984: 4, 1983: 2, 1982: 2, 1981: 1, 1980: 1, 1979: 2, 1978: 1, 1977: 3
Sort order: 6892 results found (search time: 0 ms)
51.
We compare the performance of seven robust estimators of the parameter of an exponential distribution. These include the debiased median and two optimally weighted one-sided trimmed means. We also introduce four new estimators: the Transform, Bayes, Scaled, and Bicube estimators. We carry out Monte Carlo comparisons for three sample sizes and six situations, evaluating them with a new performance measure, Mean Absolute Differential Error (MADE), and a premium/protection interpretation of MADE. The comparisons are organized to enhance statistical power by making maximal use of common random deviates. The Transform estimator provides the best performance as judged by MADE, and the singly trimmed mean and Transform method define the efficient frontier of premium/protection.
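The abstract does not spell out how MADE is computed, so the following is a minimal Monte Carlo sketch under the assumption that MADE is the Monte Carlo mean of |θ̂ − θ| for the exponential parameter θ. The estimators shown (the sample mean and a crude, unweighted one-sided trimmed mean) are illustrative stand-ins, not the paper's optimally weighted versions; applying every estimator to the same samples mirrors the use of common random deviates mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

def trimmed_mean(x, trim=0.1):
    """Crude one-sided trimmed mean: drop the largest `trim` fraction of the
    observations (no optimal weighting or bias correction, unlike the paper)."""
    x = np.sort(x)
    return x[: int(np.floor(len(x) * (1 - trim)))].mean()

def compare_made(estimators, theta=1.0, n=20, reps=5000, contam=0.0):
    """Assumed MADE = Monte Carlo mean of |theta_hat - theta|, with all
    estimators applied to the same samples (common random deviates)."""
    errs = {name: np.empty(reps) for name in estimators}
    for r in range(reps):
        x = rng.exponential(theta, size=n)
        if contam > 0:                          # a simple outlier 'situation'
            x[rng.random(n) < contam] *= 10.0
        for name, est in estimators.items():
            errs[name][r] = abs(est(x) - theta)
    return {name: e.mean() for name, e in errs.items()}

ests = {"sample mean": np.mean, "trimmed mean": trimmed_mean}
for contam in (0.0, 0.1):
    print(contam, compare_made(ests, contam=contam))
```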
52.
The performance of the usual Shewhart control charts for monitoring process means and variation can be greatly affected by nonnormal data or correlated subgroups. Define the α_k-risk for a Shewhart chart as the probability that at least one "out-of-control" subgroup occurs in k subgroups when the control limits are calculated from those k subgroups. Simulation results show that the α_k-risk can be quite large even for a process with normally distributed, independent subgroups, and that it increases dramatically when the data are nonnormal. A method is also developed for simulating an "in-control" process with correlated subgroups from an autoregressive model. Simulations with this model indicate marked changes in the α_k-risks for Shewhart charts applied to this type of correlated process data. In practice, therefore, a process should be investigated thoroughly as to whether it is generating normal, independent data before out-of-control points on the control charts are attributed to a real assignable cause.
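A minimal simulation sketch of the α_k-risk idea for an X̄ chart with estimated limits: the same k subgroups both set the limits and are checked against them. The subgroup size, the chart constants (3-sigma limits with the usual c4 correction), and the Student-t alternative used as a nonnormal case are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(1)
C4 = 0.9400  # unbiasing constant c4 for subgroup size n = 5

def alpha_k_risk(k=25, n=5, reps=20_000, t_df=None):
    """Estimate P(at least one subgroup mean falls outside Xbar limits that were
    computed from the same k subgroups).  t_df=None gives normal data; otherwise
    Student-t data is used as a simple nonnormal case."""
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=(k, n)) if t_df is None else rng.standard_t(t_df, size=(k, n))
        xbar = x.mean(axis=1)
        center = xbar.mean()
        half_width = 3.0 * x.std(axis=1, ddof=1).mean() / (C4 * np.sqrt(n))
        hits += np.any(np.abs(xbar - center) > half_width)
    return hits / reps

print("normal data :", alpha_k_risk())
print("t(3) data   :", alpha_k_risk(t_df=3))
```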
53.
Recently, several new control chart procedures for short production runs have been introduced. Bothe (1989) and Burr (1989) proposed control chart statistics obtained by scaling the quality characteristic by target values or process estimates of a location and scale parameter. The performance of these control charts can be significantly affected by the use of incorrect scaling parameters, resulting in either an excessive false-alarm rate or insensitivity to moderate shifts in the process. To correct these deficiencies, Quesenberry (1990, 1991) developed the Q-chart, which is formed from running process estimates of the sample mean and variance. For the case where both the process mean and variance are unknown, the Q-chart statistic is formed from the standard inverse Z-transformation of a t-statistic. Q-charts do not perform correctly, however, in the presence of special-cause disturbances at process startup. This has recently been supported by results published by Del Castillo and Montgomery (1992), who recommend an alternative control chart procedure based on a first-order adaptive Kalman filter model. Consistent with their recommendations, we propose an alternative short-run control chart procedure based on the second-order dynamic linear model (DLM). The control chart is shown to be useful for the early detection of unwanted process trends. Model and control chart parameters are updated sequentially in a Bayesian estimation framework, providing the greatest degree of flexibility in the level of prior information incorporated into the model. The result is a weighted moving average control chart statistic that can also provide running estimates of process capability. The average run length performance of the control chart is compared to the optimal performance of the exponentially weighted moving average (EWMA) chart, as reported by Gan (1991). Using a simulation approach, the second-order DLM control chart is shown to provide better overall performance than the EWMA for short production run applications.
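A minimal sketch of the Q statistic described above for individual observations when both the process mean and variance are unknown, i.e., the inverse normal transform of a running t-statistic. The formula below is the commonly quoted form of Quesenberry's Q chart and should be checked against the original papers.

```python
import numpy as np
from scipy import stats

def q_statistics(x):
    """Q statistics for individual observations with mean and variance unknown:
    Q_r = Phi^{-1}( T_{r-2}( sqrt((r-1)/r) * (x_r - xbar_{r-1}) / s_{r-1} ) ), r >= 3,
    where xbar_{r-1} and s_{r-1} are the mean and sd of the first r-1 observations."""
    x = np.asarray(x, dtype=float)
    q = []
    for r in range(3, len(x) + 1):
        prev = x[: r - 1]
        xbar, s = prev.mean(), prev.std(ddof=1)
        t = np.sqrt((r - 1) / r) * (x[r - 1] - xbar) / s
        q.append(stats.norm.ppf(stats.t.cdf(t, df=r - 2)))
    return np.array(q)

rng = np.random.default_rng(2)
x = rng.normal(10.0, 2.0, size=15)           # a short production run
print(np.round(q_statistics(x), 2))          # plot against +/- 3 limits as a Q chart
```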
54.
In this article the problem of the optimal selection and allocation of time points in repeated measures experiments is considered. D-optimal designs for linear regression models with a random intercept and first-order autoregressive serial correlations are computed numerically and compared with designs having equally spaced time points. When the order of the polynomial is known and the serial correlations are not too small, the comparison shows that for any fixed number of repeated measures, a design with equally spaced time points is almost as efficient as the D-optimal design. When, however, there is no prior knowledge about the order of the underlying polynomial, the best choice in terms of efficiency is a D-optimal design for the highest possible relevant order of the polynomial; a design with equally spaced time points is the second-best choice.
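A minimal sketch of the quantities such a comparison rests on: the fixed-effects information matrix for a polynomial in time with a random intercept and AR(1) serial correlation, and the D-efficiency of equally spaced time points relative to the best design found by a crude random search. The variance components, the correlation, and the search itself are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def info_matrix(t, order=2, sigma_b2=1.0, sigma_e2=1.0, rho=0.6):
    """Fixed-effects information M = X' V^{-1} X for a polynomial of given order
    in time, with V = sigma_b2 * J + sigma_e2 * R and R_ij = rho^|t_i - t_j| (AR(1)).
    Variance components and rho are illustrative assumptions."""
    t = np.asarray(t, dtype=float)
    X = np.vander(t, order + 1, increasing=True)            # columns 1, t, t^2, ...
    R = rho ** np.abs(np.subtract.outer(t, t))
    V = sigma_b2 * np.ones((len(t), len(t))) + sigma_e2 * R
    return X.T @ np.linalg.solve(V, X)

def d_criterion(t, **kw):
    return np.linalg.slogdet(info_matrix(t, **kw))[1]        # log det M

rng = np.random.default_rng(3)
equal = np.linspace(0, 1, 5)                                 # 5 equally spaced measures
# Crude random search over candidate time points on [0, 1], standing in for a
# proper D-optimal design algorithm.
candidates = [equal] + [np.sort(rng.random(5)) for _ in range(20000)]
best = max(candidates, key=d_criterion)
p = 3                                                        # parameters of a quadratic
eff = np.exp((d_criterion(equal) - d_criterion(best)) / p)
print("D-efficiency of the equally spaced design:", round(eff, 3))
```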
55.
This is a survey article on known results about analytic and numerical solutions of optimal designs for various regression models for experiments with mixtures. The regression models include polynomial models, models containing homogeneous functions, models containing inverse terms and ratios, log contrast models, models with quantitative variables, and models containing the amount of mixture. Optimality criteria considered include D-, A-, E-, φ_p-, and I_λ-optimality. Uniform designs and uniform optimal designs for mixture components, and the efficiencies of the {q,2} simplex-centroid design, are briefly discussed.
56.
In this paper, we introduce a version of Hayter and Tsui's statistical test with double sampling for the mean vector of a population under a multivariate normal assumption. A study showed that this new test is as efficient as, or more efficient than, the well-known Hotelling's T² test with double sampling. Some attractive features of Hayter and Tsui's test are its simplicity of implementation and its ability to identify the errant variables when the null hypothesis is rejected. Taking this into consideration, a new control chart called HTDS is also introduced as a tool to monitor a multivariate process mean vector when using double sampling.
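A minimal sketch of the single-sample Hayter and Tsui statistic that the HTDS chart builds on: the maximum absolute standardized deviation, with its critical value simulated from the in-control multivariate normal distribution. The double-sampling extension introduced in the paper is not shown, and the mean vector and correlation matrix are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative in-control parameters (assumptions, not from the paper).
mu0 = np.array([0.0, 0.0, 0.0])
Sigma = np.array([[1.0, 0.5, 0.3],
                  [0.5, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])
sd = np.sqrt(np.diag(Sigma))

def critical_value(alpha=0.005, reps=200_000):
    """Simulated C such that P(max_i |Z_i| > C) = alpha when the process is in control."""
    z = (rng.multivariate_normal(mu0, Sigma, size=reps) - mu0) / sd
    return np.quantile(np.abs(z).max(axis=1), 1 - alpha)

C = critical_value()

def hayter_tsui(x):
    """Single-sample Hayter-Tsui check: signal if max |standardized deviation| > C;
    also return the indices of the errant variables."""
    z = np.abs((x - mu0) / sd)
    return z.max() > C, np.where(z > C)[0]

print(hayter_tsui(np.array([0.2, -0.1, 0.3])))    # in control: no signal expected
print(hayter_tsui(np.array([3.5, 0.1, -0.2])))    # shift in the first variable
```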
57.
Economic statistical designs aim to minimize the cost of process monitoring when a specific scenario, or a set of estimated process and cost parameters, is given. In practice, however, the process may be affected by more than one scenario, which can lead to severe cost penalties if the wrong design is used. Here, we investigate the robust economic statistical design (RESD) of the T² chart in an attempt to reduce these cost penalties when there are multiple scenarios. Our method employs genetic algorithm (GA) optimization to minimize the total expected monitoring cost across all distinct scenarios. We illustrate the effectiveness of the method using two numerical examples. Simulation studies indicate that robust economic statistical designs should be encouraged in practice.
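A minimal sketch of the robust-design idea: pick a single chart design that minimizes the expected monitoring cost averaged over several scenarios. The per-scenario cost function below is a deliberately simplified stand-in for the paper's cost model, without the statistical constraints an economic statistical design would normally impose, and SciPy's differential evolution is used in place of the paper's genetic algorithm.

```python
import numpy as np
from scipy import stats
from scipy.optimize import differential_evolution

p = 2  # number of monitored quality characteristics

# Hypothetical scenarios (shift size d, occurrence rate per hour lam, cost weights).
# All numbers are illustrative only, not the paper's cost model.
scenarios = [
    dict(d=1.0, lam=0.02, c_false=50.0, c_out=100.0, c_samp=1.0),
    dict(d=2.0, lam=0.05, c_false=50.0, c_out=150.0, c_samp=1.0),
]

def hourly_cost(design, sc):
    """Deliberately simplified expected cost per hour for a chi-square/T2-type chart
    with subgroup size n, sampling interval h (hours), and control limit L."""
    n, h, L = design
    n = max(int(round(n)), 1)
    arl0 = 1.0 / stats.chi2.sf(L, df=p)                        # in-control ARL
    arl1 = 1.0 / stats.ncx2.sf(L, df=p, nc=n * sc["d"] ** 2)   # out-of-control ARL
    return (sc["c_false"] / (h * arl0)            # false-alarm cost rate
            + sc["c_out"] * sc["lam"] * h * arl1  # detection-delay cost rate
            + sc["c_samp"] * n / h)               # sampling cost rate

def robust_cost(design):
    # Equal scenario weights; the RESD idea is to do well across all of them.
    return np.mean([hourly_cost(design, sc) for sc in scenarios])

# Differential evolution is used here as a stand-in for the paper's genetic algorithm.
res = differential_evolution(robust_cost, bounds=[(1, 15), (0.25, 8), (5, 20)], seed=5)
print("design (n, h, L):", np.round(res.x, 2), " expected cost/hour:", round(res.fun, 2))
```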
58.
59.
In choice experiments the decision-making process can be more complex than that proposed by the Multinomial Logit Model (MNL). In these scenarios, models such as the Nested Multinomial Logit Model (NMNL) are often employed to model more complex decision-making. Understanding the decision-making process is important in fields such as marketing, and precise estimation of these models is crucial to that understanding. This requires optimal experimental designs, and the information matrix is key to constructing them. Previous research developed the expression for the information matrix of the two-level NMNL model with two nests: an alternatives nest (J alternatives) and a no-choice nest (1 alternative). In this paper, we develop the likelihood function for a two-stage NMNL model with M nests and present the expression for the information matrix for two nests with any number of alternatives in them. We also show alternative D-optimal designs for no-choice scenarios that have similar relative efficiency but less complex alternatives, which can help obtain more reliable answers, and we present an application of these designs.
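A minimal sketch of the two-level NMNL choice probabilities and the log-likelihood they enter, for the two-nest case described above (an alternatives nest plus a degenerate no-choice nest). The utilities and nesting parameters are illustrative, and the design and information-matrix machinery of the paper is not shown.

```python
import numpy as np

def nmnl_probs(V, nests, lam):
    """Choice probabilities for a two-level nested MNL.
    V: utilities of all alternatives; nests: list of index arrays, one per nest;
    lam: nesting (dissimilarity) parameters, one per nest."""
    V = np.asarray(V, dtype=float)
    probs = np.empty_like(V)
    iv = np.array([lam[m] * np.log(np.sum(np.exp(V[idx] / lam[m])))
                   for m, idx in enumerate(nests)])            # lambda_m * inclusive value
    p_nest = np.exp(iv) / np.exp(iv).sum()                     # upper-level probabilities
    for m, idx in enumerate(nests):
        within = np.exp(V[idx] / lam[m])
        probs[idx] = p_nest[m] * within / within.sum()         # lower level x upper level
    return probs

def log_likelihood(choices, V, nests, lam):
    """Log-likelihood of observed choices (same utilities for every respondent,
    purely for illustration)."""
    p = nmnl_probs(V, nests, lam)
    return np.sum(np.log(p[choices]))

# Nest 0: three product alternatives; nest 1: the degenerate no-choice alternative.
nests = [np.array([0, 1, 2]), np.array([3])]
V = np.array([0.8, 0.3, -0.1, 0.0])   # illustrative utilities
lam = np.array([0.6, 1.0])            # lambda fixed at 1 for the one-alternative nest
print(np.round(nmnl_probs(V, nests, lam), 3))
print(round(log_likelihood(np.array([0, 3, 1]), V, nests, lam), 3))
```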
60.
In this paper, we propose new estimation techniques for the system of S-distributions. Besides "exact" maximum likelihood (ML), we propose simulated ML and a characteristic function-based procedure. The "exact" and simulated likelihoods can also be used to provide numerical, MCMC-based Bayesian inferences.
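A minimal sketch of the "exact" likelihood evaluation, assuming the usual definition of the S-distribution through the differential equation dF/dx = α(F^g − F^h) with F(x0) = 1/2 and h > g; the paper's exact parameterization may differ, and the simulated-ML and characteristic-function procedures are not shown. The logistic distribution (α = 1, g = 1, h = 2, x0 = 0) is used as a test case.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def s_loglik(params, x):
    """'Exact' log-likelihood under the assumed S-distribution definition
    dF/dx = alpha * (F^g - F^h), F(x0) = 1/2, with alpha > 0 and 0 <= g < h."""
    alpha, g, h, x0 = params
    if alpha <= 0 or g < 0 or h <= g:
        return -np.inf
    x = np.sort(np.asarray(x, dtype=float))
    rhs = lambda t, F: alpha * (F ** g - F ** h)
    F = np.empty_like(x)
    lo, hi = x < x0, x >= x0
    if hi.any():   # integrate the CDF forward from x0
        F[hi] = solve_ivp(rhs, (x0, x[-1]), [0.5], t_eval=x[hi], rtol=1e-8).y[0]
    if lo.any():   # integrate backward from x0 (t_eval follows the integration order)
        F[lo] = solve_ivp(rhs, (x0, x[0]), [0.5], t_eval=x[lo][::-1], rtol=1e-8).y[0][::-1]
    F = np.clip(F, 1e-12, 1.0 - 1e-12)               # guard against solver overshoot
    f = alpha * (F ** g - F ** h)                    # density implied by the ODE
    return np.sum(np.log(np.clip(f, 1e-300, None)))

rng = np.random.default_rng(6)
data = rng.logistic(loc=0.0, scale=1.0, size=200)    # logistic = S(alpha=1, g=1, h=2, x0=0)
fit = minimize(lambda p: -s_loglik(p, data), x0=[0.8, 0.8, 2.5, 0.2], method="Nelder-Mead")
print(np.round(fit.x, 3))
```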