221.

The studied topic is motivated by the problem of interlaboratory comparisons. This paper focuses on confidence interval estimation of the between-group variance in the unbalanced heteroscedastic one-way random effects model. Several interval estimators are proposed and compared by means of a simulation study. The most strongly recommended (safest) choice is the confidence interval based on Bonferroni's inequality.
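As a rough illustration of why a Bonferroni-based construction is the "safest" choice, the sketch below (not the paper's estimator; the group count, sample sizes, and normal model are invented for the demo) builds k per-group intervals each at level 1 - alpha/k, so that by Bonferroni's inequality the joint coverage is at least 1 - alpha whenever each component interval holds its nominal level:

```python
from statistics import NormalDist
import numpy as np

def bonferroni_intervals(samples, alpha=0.05):
    """k per-group z-intervals, each at level alpha/k; Bonferroni's inequality
    then bounds the joint non-coverage probability by alpha."""
    k = len(samples)
    z = NormalDist().inv_cdf(1 - alpha / (2 * k))
    return [(x.mean() - z * x.std(ddof=1) / np.sqrt(len(x)),
             x.mean() + z * x.std(ddof=1) / np.sqrt(len(x))) for x in samples]

rng = np.random.default_rng(0)
n_rep, hits = 2000, 0
for _ in range(n_rep):
    groups = [rng.normal(0.0, 1.0, 200) for _ in range(5)]  # all true means 0
    ints = bonferroni_intervals(groups)
    hits += all(lo <= 0.0 <= hi for lo, hi in ints)
print(hits / n_rep)  # empirical joint coverage, near the nominal 0.95
```

The z-quantile is an approximation to the exact per-group interval; the Bonferroni guarantee is only as good as the component intervals, which is why the paper's simulation comparison matters.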
222.

Evolutionary algorithms are heuristic stochastic search and optimization techniques whose principles are taken from natural genetics. They are procedures that mimic the evolution of an initial population through genetic transformations. This paper is concerned with the problem of finding A-optimal incomplete block designs for multiple treatment comparisons represented by a matrix of contrasts. An evolutionary algorithm for searching for optimal, or nearly optimal, incomplete block designs is described in detail. Various examples of applying the algorithm to well-known problems illustrate its good performance.
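A minimal evolutionary search of this kind can be sketched as follows. The design sizes, the fitness (trace of the Moore-Penrose inverse of the information matrix, as an A-criterion proxy), and the single mutation operator are illustrative choices, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
v, b, k = 5, 10, 3              # treatments, blocks, block size (illustrative)

def a_value(design):
    """A-criterion proxy: trace of the Moore-Penrose inverse of the
    information matrix C = R - N N'/k (smaller is better)."""
    N = np.zeros((v, b))
    for j in range(b):
        for t in design[j]:
            N[t, j] += 1
    C = np.diag(N.sum(axis=1)) - N @ N.T / k
    return np.trace(np.linalg.pinv(C))

def mutate(design):
    """Reassign one randomly chosen plot to a random treatment."""
    child = design.copy()
    child[rng.integers(b), rng.integers(k)] = rng.integers(v)
    return child

pop = [rng.integers(v, size=(b, k)) for _ in range(30)]   # initial population
initial_best = min(a_value(d) for d in pop)
for _ in range(200):                                      # generations
    pop.sort(key=a_value)                                 # rank by fitness
    pop = pop[:10] + [mutate(pop[rng.integers(10)]) for _ in range(20)]
best = min(pop, key=a_value)
print(a_value(best), initial_best)
```

Elitism (keeping the ten best designs each generation) guarantees the final A-value is never worse than the initial population's best; a serious implementation would add crossover and a connectedness check on the design.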
223.
The usual approach to diagnosing collinearity proceeds by centering and standardizing the regressors. The sample correlation matrix of the predictors is then the basic tool for describing approximate linear combinations that may distort the conclusions of a standard least-squares analysis. However, as several authors have noted, centering may fail to detect the sources of ill-conditioning. Despite this earlier claim, the literature does not seem to contain a fully clear explanation of why the traditional strategy for analyzing collinearity can behave so poorly. This note studies the issue in some detail. The results derived are motivated by the analysis of a well-known real dataset, and practical conclusions are illustrated with several examples.
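The failure mode alluded to here can be reproduced in a few lines with synthetic data: when a regressor is nearly constant, the raw (column-equilibrated) design matrix is severely ill-conditioned because that column is almost collinear with the intercept, yet the correlation matrix of the centered and standardized predictors looks perfectly well conditioned:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
x1 = 10.0 + 1e-4 * rng.normal(size=n)    # nearly constant regressor
x2 = rng.normal(size=n)

# Raw design with intercept, columns scaled to unit length.
X = np.column_stack([np.ones(n), x1, x2])
Xs = X / np.linalg.norm(X, axis=0)
kappa_raw = np.linalg.cond(Xs)           # huge: x1 is almost the intercept

# Centering/standardizing hides it: the correlation matrix looks benign.
kappa_centred = np.linalg.cond(np.corrcoef(np.column_stack([x1, x2]),
                                           rowvar=False))
print(kappa_raw, kappa_centred)
```

The condition number of the scaled raw design explodes while the correlation-matrix condition number stays near 1, which is exactly the discrepancy the note sets out to explain.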
224.
This article considers a Bayesian hierarchical model for multiple comparisons in linear models where the population medians satisfy a simple order restriction. Representing the asymmetric Laplace distribution as a scale mixture of normals with an exponential mixing density, and using a continuous prior restricted to the order constraints, a Gibbs sampling algorithm for parameter estimation and simultaneous comparison of treatment medians is proposed. Posterior probabilities of all possible hypotheses on the equality/inequality of treatment medians are estimated using Bayes factors computed via Savage-Dickey density ratios. The performance of the proposed median-based model is investigated in simulated and real datasets. The results show that the proposed method can outperform the commonly used method based on treatment means when the data come from nonnormal distributions.
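The scale-mixture representation underlying such a Gibbs sampler can be checked directly in the symmetric (median, tau = 0.5) case, where the asymmetric Laplace reduces to the ordinary Laplace: if V is exponential with mean 2b^2 and X | V ~ N(0, V), then X ~ Laplace(0, b). A quick Monte Carlo check (the scale b is an arbitrary demo value):

```python
import numpy as np

rng = np.random.default_rng(3)
b, n = 1.5, 200_000                  # Laplace scale (arbitrary), sample size

V = rng.exponential(scale=2 * b * b, size=n)  # exponential mixing variances
X = rng.normal(0.0, np.sqrt(V))               # X | V ~ N(0, V)

print(X.var())           # Laplace(0, b) variance is 2 b^2
print(np.abs(X).mean())  # Laplace(0, b) mean absolute deviation is b
```

It is this conditional normality given V that makes the full conditionals in the sampler tractable.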
225.
In this paper, one-stage multiple comparison procedures with the average for exponential location parameters, based on doubly censored samples under heteroscedasticity, are proposed. These intervals can be used to identify a subset that includes all no-worse-than-the-average treatments in an experimental design, and to identify better-than-the-average, worse-than-the-average, and not-much-different-from-the-average products in agriculture, emerging markets, and the pharmaceutical industry. The critical values are tabulated for practical use. A simulation study of confidence length and coverage probability is conducted. Finally, an example comparing four drugs in the treatment of leukemia demonstrates the proposed procedures.
226.
Summary. We construct empirical Bayes intervals for a large number p of means. The existing intervals in the literature assume that the variances are either equal, or unequal but known. When the variances are unequal and unknown, the usual suggestion is to replace them by unbiased estimators. However, when p is large, there is an advantage in "borrowing strength" across coordinates. We derive double-shrinkage intervals for the means on the basis of empirical Bayes estimators that shrink both the means and the variances. Analytical and simulation studies, and an application to a real dataset, show that, compared with the t-intervals, our intervals have higher coverage probabilities while being shorter on average. The double-shrinkage intervals are on average shorter than the intervals obtained by shrinking the means alone, and are never longer than the intervals obtained by shrinking the variances alone. The intervals are also explicitly defined and can be computed immediately.
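The idea of shrinking both the means and the variances can be illustrated with a deliberately simplified sketch. The equal-weight variance pull and the James-Stein-type factor below are stand-ins for the authors' empirical Bayes solution, and the data-generating values are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
p, n = 50, 5
mu = rng.normal(0.0, 0.3, p)                   # true means, modest spread
sd = rng.uniform(0.5, 1.5, p)                  # unequal, unknown sds
data = rng.normal(mu[:, None], sd[:, None], (p, n))

xbar = data.mean(axis=1)
s2 = data.var(axis=1, ddof=1)

# Shrink each variance toward the average variance (equal weights are a
# stand-in for the empirical Bayes weights).
s2_shrunk = 0.5 * s2 + 0.5 * s2.mean()

# Shrink the means toward the grand mean with a James-Stein-type factor.
grand = xbar.mean()
B = max(0.0, 1.0 - (p - 3) * (s2_shrunk / n).mean()
        / ((xbar - grand) ** 2).sum())
mu_shrunk = grand + B * (xbar - grand)

mse_raw = ((xbar - mu) ** 2).mean()
mse_shrunk = ((mu_shrunk - mu) ** 2).mean()
print(mse_shrunk, mse_raw)
```

Even this crude version beats the raw group means in mean squared error when the true means are close together relative to the sampling noise, which is the regime where shorter shrinkage intervals pay off.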
227.
Developing a feasible evaluation plan is challenging when multiple activities, often sponsored by multiple agencies, work together toward a common goal. Often, resources are limited and not every agency's interest can be represented in the final evaluation plan. This article illustrates how the Antecedent Target Measurement (ATM) approach to logic modeling was adapted to meet this challenge. The key adaptation is the context map generated in the first step of the ATM approach, which makes visually explicit as many of the underlying conditions contributing to a problem as possible. The article also shows how a prioritization matrix can help the evaluator filter the context map to prioritize the outcomes included in the final evaluation plan, as well as to create realistic outcomes. This transparent prioritization process can be especially helpful in managing the evaluation expectations of multiple agencies with competing interests. Additional strategic planning benefits of the context map include pinpointing redundancies caused by overlapping collaborative efforts, identifying gaps in coverage, and assisting the coordination of multiple stakeholders.
228.
In this article, we propose a more general criterion, called the Sp-criterion, for subset selection in the multiple linear regression model. Many subset selection methods are based on the least squares (LS) estimator of β, but whenever the data contain an influential observation, or the distribution of the error variable deviates from normality, the LS estimator performs poorly and hence a method based on it (for example, Mallows' Cp-criterion) tends to select a "wrong" subset. The proposed method overcomes this drawback; its main feature is that it can be used with any estimator of β (the LS estimator or any robust estimator) without any modification of the criterion. Moreover, the technique is operationally simpler to implement than other existing criteria. The method is illustrated with examples.
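For context, the LS-based baseline named here, Mallows' Cp, can be sketched as an exhaustive subset search (Cp = SSE_p / sigma2_hat - n + 2p). The data-generating model is invented for the demo; the point of the Sp-criterion is that the residual computation below could be swapped for one based on a robust estimator without changing the selection loop:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
n = 80
X = rng.normal(size=(n, 4))
beta = np.array([2.0, 0.0, -1.5, 0.0])   # true model uses x0 and x2 only
y = X @ beta + rng.normal(size=n)

def sse(cols):
    """Residual sum of squares of the LS fit on the given columns."""
    Xs = np.column_stack([np.ones(n), X[:, list(cols)]])
    resid = y - Xs @ np.linalg.lstsq(Xs, y, rcond=None)[0]
    return resid @ resid

sigma2 = sse(range(4)) / (n - 5)         # full-model error variance estimate

best, best_cp = None, np.inf
for size in range(1, 5):
    for cols in combinations(range(4), size):
        p = size + 1                     # parameters, including the intercept
        cp = sse(cols) / sigma2 - n + 2 * p   # Mallows' Cp
        if cp < best_cp:
            best, best_cp = cols, cp
print(best, best_cp)
```

With strong true coefficients the selected subset contains x0 and x2; a single influential observation would corrupt both sse and sigma2 here, which is the weakness the article's criterion targets.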
229.
230.
This article proposes a class of weighted differences of averages (WDA) statistics to test for and estimate possible change-points in variance for time series with weakly dependent blocks and for dependent panel data, without specific distributional assumptions. We derive the asymptotic distributions of the test statistics for testing the existence of a single variance change-point under the null and under local alternatives. We also study the consistency of the change-point estimator. Within the proposed class of WDA test statistics, a standardized WDA test is shown to have the best consistency rate and is recommended for practical use. An iterative binary searching procedure is suggested for estimating the locations of possible multiple change-points in variance, and its consistency is also established. Simulation studies compare the detection power and the number of false rejections of the proposed procedure with those of a cumulative sum (CUSUM)-based test and a likelihood-ratio-based test. Finally, we apply the proposed method to a stock index dataset and an unemployment rate dataset. Supplementary materials for this article are available online.
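The CUSUM competitor mentioned in the comparison can be sketched in a few lines as a CUSUM-of-squares change-point estimator (this is the benchmark, not the WDA statistic; the simulated series is invented, with a single variance break at t = 300):

```python
import numpy as np

rng = np.random.default_rng(6)
# Standard deviation jumps from 1 to 2 at t = 300 in a series of length 600.
x = np.concatenate([rng.normal(0, 1.0, 300), rng.normal(0, 2.0, 300)])

# CUSUM of squares: D_k = S_k / S_n - k / n, with S_k the running sum of x_t^2.
S = np.cumsum(x ** 2)
n = len(x)
D = S / S[-1] - np.arange(1, n + 1) / n
k_hat = int(np.argmax(np.abs(D))) + 1    # estimated change-point location
print(k_hat)
```

The argmax of |D| concentrates at the true break; the article's iterative binary search applies such a single-break detector recursively to the segments on either side of each detected point.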