313 search results found (search time: 15 ms)
11.
The two-way, two-level crossed factorial design is commonly used by practitioners in the exploratory phase of industrial experiments. The F-test in the usual linear model for analysis of variance (ANOVA) is a key instrument for assessing the impact of each factor, and of their interactions, on the response variable. However, if assumptions such as normality and homoscedasticity of the errors are violated, the conventional wisdom is to resort to nonparametric tests. Nonparametric methods, rank-based as well as permutation, have been the subject of recent investigations aimed at making them effective in testing the hypotheses of interest and at improving their performance in small-sample situations. In this study, we assess the performance of several nonparametric methods and, more importantly, compare their powers. Specifically, we examine three permutation methods (Constrained Synchronized Permutations, Unconstrained Synchronized Permutations, and the Wald-Type Permutation Test), a rank-based method (the Aligned Rank Transform), and a parametric method (the ANOVA-Type Test). In the simulations, we generate datasets with different configurations of error distribution, variance, factor effects, and number of replicates. The objective is to offer practical advice and guidance to practitioners regarding the sensitivity of the tests in the various configurations, the conditions under which some tests cannot be used, the trade-off between power and type I error, and the bias in the power for one main factor caused by the presence of an effect of the other factor. A dataset from an industrial engineering experiment on thermoformed packaging production illustrates the application of the various methods of analysis, taking into account the power of the test suggested by the objective of the experiment.
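The restricted-permutation idea behind such tests can be illustrated with a minimal sketch (not the paper's exact Synchronized Permutation algorithms): to test the main effect of a two-level factor A in a 2×2 design, permute the A labels within each level of B and compare the resulting F statistics with the observed one. The design sizes and the effect size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def f_stat_A(y, a):
    """One-degree-of-freedom F statistic for a two-level factor A."""
    g0, g1 = y[a == 0], y[a == 1]
    grand = y.mean()
    ss_between = len(g0) * (g0.mean() - grand) ** 2 + len(g1) * (g1.mean() - grand) ** 2
    ss_within = ((g0 - g0.mean()) ** 2).sum() + ((g1 - g1.mean()) ** 2).sum()
    return ss_between / (ss_within / (len(y) - 2))

def perm_test_A(y, a, b, n_perm=2000):
    """Permutation p-value for A's main effect, permuting A labels within B."""
    obs = f_stat_A(y, a)
    hits = 0
    for _ in range(n_perm):
        a_perm = a.copy()
        for lvl in (0, 1):                      # shuffle only inside each B stratum
            idx = np.where(b == lvl)[0]
            a_perm[idx] = rng.permutation(a_perm[idx])
        hits += f_stat_A(y, a_perm) >= obs
    return (hits + 1) / (n_perm + 1)

# 2x2 design, 10 replicates per cell; factor A has a real effect of size 2
a = np.repeat([0, 0, 1, 1], 10)
b = np.tile(np.repeat([0, 1], 10), 2)
y = 2.0 * a + rng.standard_normal(40)
p_value = perm_test_A(y, a, b)
```

Because the shuffling is restricted to strata of B, the null distribution of the F statistic is not contaminated by a main effect of B.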
12.
The on-going migration of refugees to Europe has fuelled debates about the indigence of refugees and the perceived legitimacy of individual claims for asylum in different receiving countries. Although a substantial body of research has investigated the antecedents of attitudes towards immigrants, evidence on whether those underlying assumptions hold true for refugees as well remains scarce. The paper applies the framework of Intergroup Threat Theory to derive competing hypotheses about the acceptance levels of refugees. We use pooled data from two probabilistic samples drawn in the German city of Dresden and apply a confounded factorial survey design to extend previous research on attitudes towards refugees. We find that natives perceive political persecution and war as justified reasons for seeking asylum in Germany, while socio-demographic characteristics of respondents and refugees are of minor importance. Above all, respondents' individual fear of crime is a crucial moderator of the perception of refugees as threatening.
13.
We propose a Markov regime-switching estimation procedure for both the long-memory parameter d and the mean of a time series. We employ a Viterbi algorithm that combines the Viterbi procedures of two-state Markov-switching parameter estimation. It is well known that a mean break and long memory in a time series can easily be confused with each other in most cases. We therefore aim at observing the deviation and interaction of the mean and d estimates in different cases. A Monte Carlo experiment reveals that the finite-sample performance of the proposed algorithm for a simple mixture model of Markov-switching mean and d varies with the fractional integration parameters and the mean values of the two regimes.
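The two-state Viterbi decoding that such a procedure builds on can be sketched as follows; the Gaussian mean-switching model, the transition probability, and all parameter values below are illustrative assumptions, not the authors' specification.

```python
import numpy as np

def viterbi_two_state(y, means, sigma=1.0, p_stay=0.95):
    """Most likely state path for a 2-state Gaussian HMM (mean switching)."""
    T = len(y)
    logA = np.log(np.array([[p_stay, 1 - p_stay], [1 - p_stay, p_stay]]))
    loglik = -0.5 * ((y[:, None] - np.asarray(means)) ** 2) / sigma ** 2
    delta = np.zeros((T, 2))                   # best log-score ending in each state
    psi = np.zeros((T, 2), dtype=int)          # backpointers
    delta[0] = np.log(0.5) + loglik[0]
    for t in range(1, T):
        cand = delta[t - 1][:, None] + logA    # cand[i, j]: from state i to j
        psi[t] = cand.argmax(axis=0)
        delta[t] = cand.max(axis=0) + loglik[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):             # backtrack
        path[t] = psi[t + 1][path[t + 1]]
    return path

rng = np.random.default_rng(1)
true_state = np.repeat([0, 1], 100)            # one mean break mid-sample
y = np.where(true_state == 1, 3.0, 0.0) + rng.standard_normal(200)
path = viterbi_two_state(y, means=[0.0, 3.0])
accuracy = (path == true_state).mean()
```

Backtracking through `psi` recovers the most likely regime path; `accuracy` measures agreement with the simulated break.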
14.
Recently, many researchers have devoted themselves to investigating the number of replicates needed for experiments in blocks of size two. In practice, experiments in blocks of size four may be more useful than those in blocks of size two. To estimate the main effects and two-factor interactions of a two-level factorial experiment in blocks, we might need many replicates. This article investigates designs with the fewest replicates for factorial experiments in blocks of size four, and presents methods for obtaining such designs.
15.
We present a surprising though obvious result that seems to have been unnoticed until now. In particular, we demonstrate the equivalence of two well-known problems: the optimal allocation of the fixed overall sample size n among L strata under stratified random sampling, and the optimal allocation of the H = 435 seats among the 50 states for apportionment of the U.S. House of Representatives following each decennial census. In spite of the strong similarity manifest in the statements of the two problems, they have not been linked, and they have well-known but different solutions; one solution is not explicitly exact (Neyman allocation), and the other (equal proportions) is exact. We give explicit exact solutions for both and note that the solutions are equivalent. In fact, we conclude by showing that both problems are special cases of a general problem. The result is significant for stratified random sampling in that it explicitly shows how to minimize sampling error when estimating a total T_Y while keeping the final overall sample size fixed at n; this is usually not the case in practice with Neyman allocation, where the resulting final overall sample size might be near n + L after rounding. An example reveals that controlled rounding with Neyman allocation does not always lead to the optimum allocation, that is, an allocation that minimizes variance.
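The equal-proportions (Huntington-Hill) method mentioned above can be sketched as a priority scheme: every unit starts with a guaranteed minimum, and each remaining seat (or sample unit) goes to the unit with the highest priority size / sqrt(n * (n + 1)). Taking the "size" of stratum h to be N_h * S_h gives the stratified-sampling analogue; the stratum sizes and totals below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def allocate(sizes, total, min_per_unit=1):
    """Equal-proportions allocation: after a guaranteed minimum, award
    units one at a time by the priority size / sqrt(n * (n + 1))."""
    n = [min_per_unit] * len(sizes)
    for _ in range(total - min_per_unit * len(sizes)):
        pri = [s / math.sqrt(k * (k + 1)) for s, k in zip(sizes, n)]
        n[pri.index(max(pri))] += 1            # next unit goes to highest priority
    return n

# hypothetical strata; "size" of stratum h taken as N_h * S_h
strata_sizes = [120.0, 80.0, 40.0, 10.0]
alloc = allocate(strata_sizes, total=25)
```

Unlike rounding a Neyman allocation, this scheme hits the target total exactly by construction, since it hands out exactly `total` units one at a time.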
16.
Effective recruitment is a prerequisite for the successful execution of a clinical trial. ALLHAT, a large hypertension treatment trial (N = 42,418), provided an opportunity to evaluate adaptive modeling of recruitment processes using conditional moving linear regression. Our statistical modeling of recruitment, comparing Brownian and fractional Brownian motion, indicates that fractional Brownian motion combined with moving linear regression outperforms classical Brownian motion, yielding a higher conditional probability of achieving the global recruitment goal in 4-week-ahead projections. Further research is needed to evaluate how recruitment modeling can assist clinical trialists in planning and executing clinical trials.
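Fractional Brownian motion can be simulated exactly on a grid by Cholesky-factorizing its covariance function, 0.5 * (s^2H + t^2H - |s - t|^2H). This generic sketch, with an illustrative Hurst index H = 0.7, is not the paper's recruitment model itself, but it shows the kind of persistent path such a model projects forward.

```python
import numpy as np

def fbm_cholesky(n, H, T=1.0, seed=0):
    """Exact fBm sample path on n grid points via Cholesky factorization
    of the covariance cov(s, t) = 0.5 * (s^2H + t^2H - |s - t|^2H)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov)                # exact, O(n^3) factorization
    return t, L @ rng.standard_normal(n)

t, path = fbm_cholesky(100, H=0.7)             # H > 0.5: persistent increments
```

With H > 0.5 the increments are positively correlated, which is what distinguishes the fractional model from classical Brownian motion (H = 0.5) when projecting accumulated recruitment.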
17.
In recent years, the issue of water allocation among competing users has been of great concern in many countries owing to increasing water demand from population growth and economic development. In water management systems, the inherent uncertainties and their potential interactions pose a significant challenge for water managers seeking optimal water-allocation schemes in a complex and uncertain environment. This paper therefore proposes a methodology that incorporates optimization techniques and statistical experimental designs within a general framework to address the issues of uncertainty and risk, as well as their correlations, in a systematic manner. A water resources management problem is used to demonstrate the applicability of the proposed methodology. The results indicate that interval solutions can be generated for the objective function and decision variables, and that a number of decision alternatives can be obtained under different policy scenarios. Solutions with different risk levels of constraint violation help quantify the relationship between the economic objective and the system risk, which is meaningful for supporting risk management. The experimental data obtained from Taguchi's orthogonal-array design are useful for identifying the significant factors affecting the mean total net benefit. The findings from the mixed-level factorial experiment then help reveal the latent interactions between those significant factors at different levels and their effects on the modeling response.
18.
The derivation of a simple method for confounding in mixed factorial experiments from an isomorphism of finite abelian groups is presented. The theoretical bases of confounding procedures that use modular arithmetic for such experiments are compared.
19.
《Econometric Reviews》2013,32(4):397-417
ABSTRACT

Many recent papers have used semiparametric methods, especially the log-periodogram regression, to detect and estimate long memory in the volatility of asset returns. In these papers, the volatility is proxied by measures such as squared, log-squared, and absolute returns. While the evidence for the existence of long memory is strong under any of these measures, the actual long-memory parameter estimates can be sensitive to which measure is used. In Monte Carlo simulations, I find that if the data are conditionally leptokurtic, the log-periodogram regression estimator based on squared returns has a large downward bias, which is avoided by using the other volatility measures. In United States stock return data, I find that squared returns give much lower estimates of the long-memory parameter than the alternative volatility measures, which is consistent with the simulation results. I conclude that researchers should avoid using squared returns in the semiparametric estimation of long-memory volatility dependencies.
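The log-periodogram (GPH) regression itself is straightforward to sketch: regress the log periodogram at the first m Fourier frequencies on -2 * log(2 * sin(lambda/2)); the slope estimates d. The bandwidth choice m = sqrt(n) below is a common convention, not necessarily the one used in the paper.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Geweke-Porter-Hudak log-periodogram regression estimate of d."""
    n = len(x)
    if m is None:
        m = int(n ** 0.5)                      # common bandwidth convention
    j = np.arange(1, m + 1)
    lam = 2 * np.pi * j / n                    # first m Fourier frequencies
    periodogram = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    X = -2 * np.log(2 * np.sin(lam / 2))       # regressor; OLS slope estimates d
    Xc = X - X.mean()
    return (Xc @ np.log(periodogram)) / (Xc @ Xc)

# sanity check on white noise, where the true d is 0
rng = np.random.default_rng(2)
d_hat = gph_estimate(rng.standard_normal(4096))
```

In practice `x` would be a volatility proxy such as log-squared or absolute returns; the sensitivity discussed above comes from applying the same regression to different proxies of the same series.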
20.
Split-plot designs have been utilized in factorial experiments in which some factors are applied to larger units and others to smaller units. Among such designs, those with low aberration are preferred once the experiment size and the numbers of whole-plot and subplot factors are determined. Minimum aberration split-plot designs can be obtained using either computer algorithms or exhaustive search. In this article, we propose a simple, easy-to-operate approach that uses two ordered sequences of columns from two orthogonal arrays to obtain minimum aberration split-plot designs for experiments of sizes 16 and 32.