71.
Abstract

It is common to monitor several correlated quality characteristics using Hotelling's T² statistic. However, T² confounds location shifts with scale shifts, so it is often difficult to determine whether an out-of-control signal is due to the process mean vector, the process covariance matrix, or both. In this paper, we propose a diagnostic procedure called the 'D-technique' to detect the nature of the shift. For this purpose, two sets of regression equations, each consisting of the regression of one variable on the remaining variables, are used to characterize the 'structure' of the 'in-control' process and that of the 'current' process. To determine the sources responsible for an out-of-control state, it is shown that it suffices to compare these two structures using a dummy-variable multiple regression equation. The proposed method is operationally simpler than, and computationally advantageous over, existing diagnostic tools. The technique is illustrated with various examples.
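As background for the statistic being diagnosed, here is a minimal Python sketch of Hotelling's T² for a single observation; the in-control parameters and data are illustrative assumptions, and the authors' D-technique (the regression-structure comparison) is not reproduced here.

```python
import numpy as np

def hotelling_t2(x, mean0, cov0):
    """Hotelling's T^2 for a p-variate observation x against the
    in-control mean vector mean0 and covariance matrix cov0."""
    d = x - mean0
    return float(d @ np.linalg.solve(cov0, d))

# In-control reference estimated from a clean sample (illustrative values).
rng = np.random.default_rng(0)
ref = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=200)
mean0, cov0 = ref.mean(axis=0), np.cov(ref, rowvar=False)

t2 = hotelling_t2(np.array([1.5, -0.5]), mean0, cov0)  # large => signal
```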
72.
ABSTRACT

The randomized response technique is an effective survey method designed to elicit sensitive information while ensuring the privacy of the respondents. In this article, we present some new results on the randomized response model in situations wherein one or two response variables are assumed to follow a multinomial distribution. For a single sensitive question, we use the well-known Hopkins randomization device to derive estimates, under the assumptions of both truthful and untruthful responses, and present a technique for making pairwise comparisons. When there are two sensitive questions of interest, we derive a Pearson product-moment correlation estimator based on the multinomial model assumption. This estimator may be used to quantify the linear relationship between two variables when multinomial response data are observed according to a randomized-response protocol.
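The Hopkins device and the multinomial results are specific to the article, but the unbiasing step common to randomized response designs is easy to illustrate. A minimal sketch under Warner's classic binary design follows; the sample figures and helper name are assumptions for illustration.

```python
import numpy as np

def warner_estimate(yes_count, n, p):
    """Warner's randomized response design: each respondent answers the
    sensitive question with probability p and its complement with
    probability 1 - p, so P(yes) = (1 - p) + (2p - 1) * pi.  Inverting
    this gives a moment estimator of the sensitive proportion pi."""
    lam = yes_count / n                               # observed 'yes' rate
    pi_hat = (lam - (1.0 - p)) / (2.0 * p - 1.0)      # requires p != 0.5
    var_hat = lam * (1.0 - lam) / (n * (2.0 * p - 1.0) ** 2)
    return pi_hat, np.sqrt(var_hat)

pi_hat, se = warner_estimate(yes_count=230, n=500, p=0.7)
```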
73.
The following two predictors are compared for time series with systematically missing observations: (a) a time series model is fitted to the full series X_t, and forecasts are based on this model; (b) a time series model is fitted to the series with systematically missing observations Y_τ, and forecasts are based on the resulting model. If the data generation processes are known vector autoregressive moving average (ARMA) processes, the first predictor is at least as efficient as the second one in a mean squared error sense. Conditions are given for the two predictors to be identical. If only the ARMA orders of the generation processes are known and the coefficients are estimated, or if the process orders and coefficients are estimated, the first predictor is again, in general, superior. There are, however, exceptions in which the second predictor, using seemingly less information, may be better. These results are discussed, using both asymptotic theory and small sample simulations. Some economic time series are used as illustrative examples.
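A small simulation makes the comparison concrete. The sketch below fits AR(1) models by least squares to a full series and to its every-second-observation subsample, then forecasts the same future value both ways; the univariate AR(1) setting and the coefficient 0.8 are simplifying assumptions, not the paper's vector ARMA framework.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate x_t = 0.8 x_{t-1} + e_t for t = 0, ..., 399 (illustrative).
n, phi = 400, 0.8
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

def fit_ar1(series):
    """Least-squares AR(1) coefficient of a univariate series."""
    y, z = series[1:], series[:-1]
    return float(z @ y / (z @ z))

# (a) model fitted to the full series; one-step forecast of x_400
pred_a = fit_ar1(x) * x[-1]

# (b) model fitted to y_tau = x_{2 tau}; one coarse step from x_398,
# forecasting the same value x_400 from seemingly less information
y = x[::2]
pred_b = fit_ar1(y) * y[-1]
```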
74.
Tests of significance are often made in situations where the standard assumptions underlying the probability calculations do not hold. As a result, the reported significance levels become difficult to interpret. This article sketches an alternative interpretation of a reported significance level, valid in considerable generality. This level locates the given data set within the spectrum of other data sets derived from the given one by an appropriate class of transformations. If the null hypothesis being tested holds, the derived data sets should be equivalent to the original one. Thus, a small reported significance level indicates an unusual data set. This development parallels that of randomization tests, but there is a crucial technical difference: our approach involves permuting observed residuals; the classical randomization approach involves permuting unobservable, or perhaps nonexistent, stochastic disturbance terms.
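To make the distinction concrete, here is a minimal sketch of the residual-permutation idea for a simple regression slope: residuals from the fitted null model are permuted, locating the observed slope within the spectrum of slopes from the derived data sets. The simple-regression setting and the Monte Carlo details are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def perm_level(y, x, n_perm=2000):
    """Reported level for H0: slope = 0, obtained by permuting the
    observed residuals of the null (intercept-only) fit and refitting
    the full model on each derived data set."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    obs = abs(beta[1])

    null_fit = y.mean()                   # fit under the null
    resid = y - null_fit                  # observed residuals
    hits = 0
    for _ in range(n_perm):
        y_star = null_fit + rng.permutation(resid)
        b_star, *_ = np.linalg.lstsq(X, y_star, rcond=None)
        hits += abs(b_star[1]) >= obs
    return (hits + 1) / (n_perm + 1)      # small => unusual data set
```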
75.
We address the problem of optimally forecasting a binary variable for a heterogeneous group of decision makers facing various (binary) decision problems that are tied together only by the unknown outcome. A typical example is a weather forecaster who needs to estimate the probability of rain tomorrow and then report it to the public. Given a conditional probability model for the outcome of interest (e.g., logit or probit), we introduce the idea of maximum welfare estimation and derive conditions under which traditional estimators, such as maximum likelihood or (nonlinear) least squares, are asymptotically socially optimal even when the underlying model is misspecified.
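The estimation idea can be sketched in a stylized form. Below, a scalar logit index is chosen to maximize average realized welfare over decision makers with heterogeneous action thresholds, under one standard payoff normalization (acting against threshold c pays y - c); the data-generating process, thresholds, and grid search are all assumptions, and the article's treatment is far more general than this toy.

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(3)

# Deliberately misspecified world: true P(y = 1 | x) is a step, not a logit.
x = rng.normal(size=1000)
y = (rng.uniform(size=x.size) < 0.5 + 0.4 * np.sign(x)).astype(float)

# Heterogeneous decision makers: act iff the reported probability exceeds c.
thresholds = rng.uniform(0.2, 0.8, size=50)

def welfare(beta):
    """Average realized welfare when expit(beta * x) is reported and
    each threshold-c decision maker acts iff the report exceeds c."""
    p = expit(beta * x)
    return np.mean([np.mean((p > c) * (y - c)) for c in thresholds])

grid = np.linspace(-3.0, 3.0, 121)
beta_mw = grid[int(np.argmax([welfare(b) for b in grid]))]  # max welfare
```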
76.
A study is carried out of sampling from half-normal and exponential distributions to develop a test of hypothesis on the mean. Although these distributions are similar, the corresponding uniformly most powerful test statistics are different. The exact distributions of these statistics may be written in terms of the incomplete gamma function. If the experimental data can be fitted by either distribution, it is advisable to carry out the test based on the half-normal distribution, as it is generally more powerful than the one based on the exponential.
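A minimal sketch of the two test statistics and their incomplete-gamma tails follows; the one-sided alternative (mean greater than mu0) and the function names are assumptions for illustration.

```python
import numpy as np
from scipy.special import gammainc   # regularized lower incomplete gamma

def pvalue_halfnormal(x, mu0):
    """H0: mean = mu0 vs H1: mean > mu0 under a half-normal model.
    The UMP statistic is sum(x_i^2); with mean mu = sigma * sqrt(2/pi),
    sum(x_i^2) / (2 sigma^2) is gamma(n/2, 1), so the upper tail is an
    incomplete gamma integral."""
    n = x.size
    sigma2 = (np.pi / 2.0) * mu0 ** 2          # sigma^2 implied by mu0
    return 1.0 - gammainc(n / 2.0, np.sum(x ** 2) / (2.0 * sigma2))

def pvalue_exponential(x, mu0):
    """Same hypotheses under an exponential model: sum(x_i) / mu0 is
    gamma(n, 1), giving another incomplete gamma tail."""
    return 1.0 - gammainc(x.size, np.sum(x) / mu0)
```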
77.
Abstract

In survival or reliability data analysis, it is often useful to estimate the quantiles of the lifetime distribution, such as the median time to failure. Several nonparametric methods can be used to construct confidence intervals for the quantiles of lifetime distributions, some of which are implemented in commonly used statistical software packages. Here we investigate the performance of different interval estimation procedures under a variety of settings with different censoring schemes. Our main objectives in this paper are to (i) evaluate the performance of confidence intervals based on the transformation approach commonly used in statistical software, (ii) introduce a new density-estimation-based approach to obtain confidence intervals for survival quantiles, and (iii) compare it with the transformation approach. We provide a comprehensive comparative study and offer some useful practical recommendations based on our results. Some numerical examples are presented to illustrate the methodologies developed.
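For the transformation approach in item (i), a minimal sketch using the lifelines package, whose Kaplan-Meier confidence band is transformation-based (exponential Greenwood); the simulated failure and censoring distributions are assumptions.

```python
# pip install lifelines
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.utils import median_survival_times

rng = np.random.default_rng(4)
event_time = rng.exponential(10.0, size=200)    # latent failure times
censor_time = rng.exponential(15.0, size=200)   # right-censoring times
duration = np.minimum(event_time, censor_time)
observed = event_time <= censor_time            # True if failure was seen

kmf = KaplanMeierFitter().fit(duration, event_observed=observed)
median = kmf.median_survival_time_
median_ci = median_survival_times(kmf.confidence_interval_)
```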
78.
ABSTRACT

Advances in statistical computing software have led to a substantial increase in the use of ordinary least squares (OLS) regression models in the engineering and applied statistics communities. Empirical evidence suggests that data sets can routinely have 10% or more outliers in many processes. Unfortunately, these outliers will typically render the OLS parameter estimates useless. The OLS diagnostic quantities and graphical plots can reliably identify a few outliers; however, they lose considerable power as the dimension and the number of outliers increase. Although there have been recent advances in methods that detect multiple outliers, improvements are needed in regression estimators that fit well in the presence of outliers. We introduce a robust regression estimator that performs well regardless of outlier quantity and configuration. Our studies show that the best available estimators are vulnerable when the outliers are extreme in the regressor space (high leverage). Our proposed compound estimator modifies recently published methods with an improved initial estimate and measure of leverage. Extensive performance evaluations indicate that the proposed estimator performs best and consistently fits the bulk of the data when outliers are present. The estimator, implemented in standard software, provides researchers and practitioners with a tool for the model-building process that protects against the severe impact of multiple outliers.
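The compound estimator itself is specific to the article, but the basic contrast with OLS is easy to demonstrate with an off-the-shelf robust fit. Below is a sketch using Huber M-estimation from statsmodels; note that plain M-estimators are themselves vulnerable to the high-leverage outliers the abstract highlights, so the simulated contamination here affects only the response.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = rng.normal(size=100)
y = 2.0 + 3.0 * x + rng.normal(scale=0.5, size=100)
y[:10] += 15.0                        # ~10% gross outliers in the response

X = sm.add_constant(x)
ols_fit = sm.OLS(y, X).fit()
rlm_fit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
# rlm_fit.params stays near (2, 3); ols_fit.params is pulled by the outliers
```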
79.
A graphical procedure for the display of treatment means that enables one to determine the statistical significance of the observed differences is presented. It is shown that the widely used least significant difference and honestly significant difference statistics can be used to construct plots in which any two means whose uncertainty intervals do not overlap are significantly different at the assigned probability level. It is argued that these plots, because of their straightforward decision rules, are more effective than those that show the observed means with standard errors or confidence limits. Several examples of the proposed displays are included to illustrate the procedure.
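The decision rule rests on one identity: if each mean is plotted with an uncertainty interval of half-width LSD/2, two intervals overlap exactly when the means differ by less than the LSD. A minimal sketch for a balanced one-way layout follows (the numbers are illustrative assumptions; the HSD version replaces the t quantile with a studentized-range quantile).

```python
import numpy as np
from scipy import stats

def lsd_intervals(means, mse, n_per_group, df_error, alpha=0.05):
    """Intervals of half-width LSD/2 around each treatment mean: two
    intervals fail to overlap exactly when the corresponding means
    differ by more than the least significant difference."""
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, df_error)
    half = t_crit * np.sqrt(2.0 * mse / n_per_group) / 2.0
    return [(m - half, m + half) for m in means]

# Balanced one-way layout: 4 treatments, 6 replicates => 20 error df.
intervals = lsd_intervals([12.1, 13.4, 15.0, 15.3],
                          mse=1.8, n_per_group=6, df_error=20)
```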
80.
Statistical hypotheses and test statistics are Boolean functions that can be manipulated using the tools of Boolean algebra. These tools are particularly useful for exploring multiple comparisons or simultaneous inference theory, in which multiparameter hypotheses or multiparameter test statistics may be decomposed into combinations of uniparameter hypotheses or uniparameter tests. These concepts are illustrated with both finite and infinite decompositions of familiar multiparameter hypotheses and tests. The corresponding decompositions of acceptance regions and rejection regions are also shown. Finally, the close relationship between hypothesis and test decompositions and Roy's union-intersection principle is demonstrated by a derivation of the union-intersection test of the univariate general linear hypothesis.
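The union-intersection logic reduces to a Boolean statement about component tests, as in this toy sketch (the componentwise t statistics and common cutoff are assumed inputs).

```python
import numpy as np

def union_intersection_test(t_stats, cutoff):
    """The global null (intersection of component nulls H0i: theta_i = 0)
    is rejected on the union of the component rejection regions, i.e.
    iff any |t_i| exceeds the common cutoff; equivalently, the global
    acceptance region is the intersection of componentwise acceptances."""
    return bool(np.any(np.abs(np.asarray(t_stats)) > cutoff))

rejected = union_intersection_test([0.4, -2.9, 1.1], cutoff=2.5)  # True
```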