71.
Until recently, a difficulty with applying the Durbin-Watson (DW) test to the dynamic linear regression model has been the lack of appropriate critical values. Inder (1986) used a modified small-disturbance distribution (SDD) to find approximate critical values. King and Wu (1991) showed that the exact SDD of the DW statistic is equivalent to the distribution of the DW statistic from the regression with the lagged dependent variables replaced by their means. Unfortunately, these means are unknown although they could be estimated by the actual variable values. This provides a justification for using the exact critical values of the DW statistic from the regression with the lagged dependent variables treated as non-stochastic regressors. Extensive Monte Carlo experiments are reported in this paper. They show that this approach leads to reasonably accurate critical values, particularly when two lags of the dependent variable are present. Robustness to non-normality is also investigated.
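A minimal simulation sketch of the recommended approach (my illustration, not the authors' code): compute the DW statistic from a dynamic regression, then obtain critical values by Monte Carlo with the lagged-dependent-variable column held fixed as a non-stochastic regressor. Coefficients and sample size are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def dw_stat(y, X):
    """Durbin-Watson statistic d = sum(diff(e)^2) / sum(e^2) from OLS residuals."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# One sample from a dynamic regression y_t = 0.5*y_{t-1} + x_t + u_t (assumed values)
T = 100
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + x[t] + rng.normal()

X = np.column_stack([np.ones(T - 1), y[:-1], x[1:]])  # lagged y as a regressor
d_obs = dw_stat(y[1:], X)

# Null distribution with the lagged-y column treated as fixed (non-stochastic):
# OLS residuals of X*beta + u equal those of u alone, so simulate u directly.
null_d = np.array([dw_stat(rng.normal(size=T - 1), X) for _ in range(2000)])
print(f"observed d = {d_obs:.3f}, 5% lower critical value = {np.quantile(null_d, 0.05):.3f}")
```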
72.
“Prophet theory” quantifies the price a statistician has to pay for his lack of information in stochastic sequences. In a recent paper, Schmitz (1991) gave a game-theoretical interpretation of this situation and, in particular, formulated a minimax conjecture for the difference case. In this note we prove that conjecture and, moreover, present minimax randomized stopping times (minimax procedures for the statistician).
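To make the object of study concrete, here is a hedged sketch (assuming iid Uniform(0,1) rewards and a simple one-threshold stopping rule, neither taken from the paper) of the gap between the prophet's value E[max X_i] and what a statistician can secure with a stopping time:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 5, 100_000
X = rng.random((reps, n))               # iid Uniform(0,1) rewards (an assumption)

prophet = X.max(axis=1).mean()          # prophet sees everything: E[max X_i]

# One-threshold rule: stop at the first X_i >= t; otherwise take the last value.
t = 0.5 ** (1 / n)                      # threshold with P(max X < t) = 1/2
hit = X >= t
first = hit.argmax(axis=1)              # index of first exceedance (0 if none)
stopped = np.where(hit.any(axis=1), X[np.arange(reps), first], X[:, -1])

print(f"prophet: {prophet:.4f}  threshold rule: {stopped.mean():.4f}  "
      f"difference: {prophet - stopped.mean():.4f}")
```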
73.
I describe a method that can be used to approximate the solution of the stochastic growth model. The method relies on approximating the return and transition functions of the original problem by taking second-order and first-order Taylor expansions around the steady state of the system. The result is the optimal linear regulator problem.
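The end point of the method is a standard optimal linear regulator, which can be solved by iterating the discrete Riccati equation. A sketch with assumed toy matrices (not the paper's growth model):

```python
import numpy as np

def solve_lqr(A, B, Q, R, beta=0.95, tol=1e-10, max_iter=10_000):
    """Iterate the discounted Riccati equation for min sum beta^t (x'Qx + u'Ru)
    subject to x' = A x + B u; return value matrix P and feedback F (u = -F x)."""
    P = np.zeros_like(Q)
    for _ in range(max_iter):
        F = np.linalg.solve(R + beta * B.T @ P @ B, beta * B.T @ P @ A)
        P_new = Q + beta * A.T @ P @ (A - B @ F)
        if np.max(np.abs(P_new - P)) < tol:
            return P_new, F
        P = P_new
    raise RuntimeError("Riccati iteration did not converge")

# Toy one-state, one-control problem (assumed numbers, purely illustrative)
A = np.array([[1.05]]); B = np.array([[1.0]])
Q = np.array([[1.0]]);  R = np.array([[0.1]])
P, F = solve_lqr(A, B, Q, R)
print("feedback rule u = -F x, with F =", F)
```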
74.
National policy initiatives require the expenditure of large amounts of resources over several years. It is common for these initiatives to generate large amounts of data that are needed in order to assess their success. Educational policies are an obvious example. Here we concentrate on Mexico's “Educational Modernisation Programme” and try to see how this plan has affected efficiency in teaching and research at Mexico's universities. We use a combined approach that includes traditional ratios together with Data Envelopment Analysis models. This mixture allows us to assess changes in efficiency at each individual university and to explore whether these changes are related to teaching, to research, or to both. Using official statistics for 55 universities over a six-year period (2007–2012), we have generated 12 ratios and estimated 21 DEA models under different definitions of efficiency. In order to make the results of the analysis accessible to the non-specialist we use models that visualise the main characteristics of the data, in particular scaling models of multivariate statistical analysis. Scaling models highlight the important aspects of the information contained in the data. Because the data are three-way (variables, universities, and years), we have chosen the Individual Differences Scaling model of Carroll and Chang. We complete the paper with a discussion of efficiency evolution in three universities.
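One building block of such an analysis is the basic input-oriented CCR DEA linear program, sketched below with made-up data for illustration (the paper combines many DEA variants with ratios and INDSCAL scaling):

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 4 units, 2 inputs (e.g., staff, budget), 1 output (e.g., graduates)
X = np.array([[20., 5.], [30., 8.], [25., 6.], [40., 9.]])   # inputs
Y = np.array([[100.], [140.], [130.], [150.]])               # outputs

def ccr_efficiency(k):
    """Input-oriented CCR envelopment LP for unit k:
    min theta  s.t.  X'lam <= theta * x_k,  Y'lam >= y_k,  lam >= 0."""
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]
    c = np.concatenate([[1.0], np.zeros(n)])                 # variables: theta, lam
    A_ub = np.block([[-X[k].reshape(-1, 1), X.T],            # X'lam - theta*x_k <= 0
                     [np.zeros((s, 1)), -Y.T]])              # -Y'lam <= -y_k
    b_ub = np.concatenate([np.zeros(m), -Y[k]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

for k in range(len(X)):
    print(f"unit {k}: efficiency = {ccr_efficiency(k):.3f}")
```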
75.
Consider the experiment to improve router bit life reported in Phadke (1986). The goal of the experiment was to increase the life of the router bit before it becomes dull, which causes excessive dust formation and a consequent expensive cleaning operation to smooth the edges of the boards. A 32-run experimental design was used, including seven two-level factors and two four-level factors (cf. Table 1). In this experiment and others, factorial designs with a mixture of two-level and μ(> 2)-level factors may be adopted. Sequential experiments composed of initial experiments and follow-up experiments are widely used to resolve ambiguities involving the aliasing of factorial effects. This article investigates, for the first time, the construction and theoretical properties of optimal designs for sequential experiments with a mixture of α two-level and β μ-level factors. The construction of an optimal design for the router bit life sequential experiment is discussed for practical use. The numerical results show that using a uniform design as the initial design is highly recommended for obtaining an efficient router bit life sequential experimental design. The novelty and significance of the work are evaluated by comparing our results to the existing literature.
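As a rough illustration of the kind of criterion used to compare such designs, the sketch below scores a mixed two-level/four-level design by the D-criterion for a main-effects model; the coding and the toy design are my assumptions, not the article's construction:

```python
import itertools
import numpy as np

def main_effects_matrix(design, levels):
    """Expand a design (rows of factor levels) into a main-effects model matrix
    using centered dummy coding for each factor (level 0 dropped)."""
    rows = []
    for run in design:
        row = [1.0]                                          # intercept
        for x, L in zip(run, levels):
            d = [1.0 if x == l else 0.0 for l in range(1, L)]
            row.extend(np.array(d) - 1.0 / L)                # center the dummies
        rows.append(row)
    return np.array(rows)

levels = [2, 2, 4]                          # two 2-level factors, one 4-level factor
candidates = list(itertools.product(*[range(L) for L in levels]))
design = candidates                         # full factorial as a benchmark design

Xm = main_effects_matrix(design, levels)
n, p = Xm.shape
d_crit = np.linalg.det(Xm.T @ Xm / n) ** (1 / p)   # scaled D-criterion
print(f"runs = {n}, parameters = {p}, D-criterion = {d_crit:.4f}")
```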
76.
Communications in Statistics - Theory and Methods, 2012, 41(13-14): 2570-2587
In a Gauss–Markov Model (GMM) with fixed constraints, all the relevant estimators satisfy these constraints exactly. As soon as the constraints become stochastic, most estimators are allowed to satisfy them only approximately, thereby leaving room for nonvanishing residuals that describe the deviation from the prior information.

Sometimes, however, linear estimators may be preferred that reproduce the prior information in the form of stochastic constraints exactly, including their variances and covariances. A typical example is the densification of a geodetic network without changing the higher-order point coordinates, which are usually introduced together with their variances and (some) covariances. Traditional estimators are based on the “Helmert” (or “S-”) transformation, or on an adaptation of the fixed-constraints least-squares estimator.

Here we show that neither approach generates the optimal reproducing estimator, which is presented in detail and compared with the other reproducing estimators in terms of their MSE risks.
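For contrast, here is a sketch of the standard baseline, Theil-Goldberger mixed estimation, which satisfies stochastic constraints only approximately (the data, constraint, and variances are assumed; the paper's reproducing estimators would instead reproduce r and its covariance exactly):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 3
X = rng.normal(size=(n, p))
b_true = np.array([1.0, -2.0, 0.5])
y = X @ b_true + rng.normal(size=n)

R = np.array([[1.0, 1.0, 0.0]])        # stochastic constraint: b1 + b2 ~ r + v
r = np.array([-1.0])
s2_e, s2_v = 1.0, 0.1                  # assumed variances of e and v

# GLS on the stacked system (y = Xb + e, r = Rb + v), each block precision-weighted
XtX = X.T @ X / s2_e + R.T @ R / s2_v
Xty = X.T @ y / s2_e + R.T @ r / s2_v
b_mixed = np.linalg.solve(XtX, Xty)

# The constraint residual is generally nonzero -- the deviation the text mentions
print("mixed estimate:", b_mixed, " constraint residual:", R @ b_mixed - r)
```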
77.
Mirrlees argued that a nonlinear personal income tax is optimal, while Stern proposed an optimal linear income tax model based on various labour supply functions, revenue requirements, and views of equity. The rationale for a progressive personal income tax system needs to be re-examined; optimal tax theory is not necessarily applicable to developing countries; and personal income tax reform should weigh the twin goals of efficiency and equity.
78.
Ashley (1983) gave a simple condition for determining when a forecast of an explanatory variable (Xt) is sufficiently inaccurate that direct replacement of Xt by the forecast yields worse forecasts of the dependent variable than does respecification of the equation to omit Xt. Many available macroeconomic forecasts were shown to be of limited usefulness in direct replacement. Direct replacement, however, is not optimal if the forecast's distribution is known. Here optimal linear forms in commercial forecasts of several macroeconomic variables are obtained by using estimates of their distributions. Although they are an improvement on the raw forecasts (direct replacement), these optimal forms are still too inaccurate to be useful in replacing the actual explanatory variables in forecasting models. The results strongly indicate that optimal forms involving several commercial forecasts will not be very useful either. Thus Ashley's (1983) sufficient condition retains its value in gauging the usefulness of a forecast of an explanatory variable in a forecasting model, even though it focuses on direct replacement.
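A hedged simulation of the trade-off behind Ashley's condition (all parameters are assumed; roughly, replacing Xt by a forecast beats omitting Xt only while the forecast's MSE stays below the variance of Xt):

```python
import numpy as np

rng = np.random.default_rng(3)
n, b, reps = 200, 1.0, 2000            # sample size, slope, Monte Carlo replications

def compare(sigma_f):
    """Mean squared forecast error for y = b*x + u using (i) a noisy forecast
    of x (direct replacement) and (ii) a model that omits x entirely."""
    se_repl = se_omit = 0.0
    for _ in range(reps):
        x = rng.normal(size=n)                          # Var(x) = 1
        y = b * x + rng.normal(size=n)
        x_hat = x + rng.normal(scale=sigma_f, size=n)   # forecast with MSE sigma_f^2
        se_repl += np.mean((y - b * x_hat) ** 2)
        se_omit += np.mean(y ** 2)                      # omission: predict the mean (0)
    return se_repl / reps, se_omit / reps

for sigma_f in (0.5, 1.0, 1.5):        # crossover occurs near sigma_f = sd(x) = 1
    mse_r, mse_o = compare(sigma_f)
    print(f"forecast sd {sigma_f}: replacement MSE {mse_r:.3f}, omission MSE {mse_o:.3f}")
```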
79.
This paper considers the implementation of a mean-reverting interest rate model with Markov-modulated parameters. Hidden Markov model filtering techniques in Elliott (1994, Automatica, 30:1399–1408) and Elliott et al. (1995, Hidden Markov Models: Estimation and Control. Springer, New York) are employed to obtain optimal estimates of the model parameters via recursive filters of auxiliary quantities of the observation process. Algorithms are developed and implemented on a financial dataset of 30-day Canadian Treasury bill yields. We also provide standard errors for the model parameter estimates. Our analysis shows that within the dataset and period studied, a model with two regimes is sufficient to describe the interest rate dynamics on the basis of very small prediction errors and the Akaike information criterion.
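A sketch of the data-generating side of such a model: a mean-reverting short rate whose parameters are modulated by a hidden two-state Markov chain. All numbers are illustrative, and the paper's filtering recursions are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
T, dt = 1000, 1 / 252                 # daily steps over roughly four years
P = np.array([[0.99, 0.01],           # regime transition probabilities (assumed)
              [0.02, 0.98]])
kappa = np.array([2.0, 0.5])          # mean-reversion speed per regime
theta = np.array([0.03, 0.08])        # long-run level per regime
sigma = np.array([0.01, 0.03])        # volatility per regime

r, s = 0.04, 0
rates, states = [], []
for _ in range(T):
    s = rng.choice(2, p=P[s])                                  # hidden regime step
    r += kappa[s] * (theta[s] - r) * dt + sigma[s] * np.sqrt(dt) * rng.normal()
    rates.append(r); states.append(s)

print(f"mean rate: {np.mean(rates):.4f}, fraction of time in regime 1: {np.mean(states):.2f}")
```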
80.
Conjoint choice experiments have become a powerful tool to explore individual preferences. The consistency of respondents' choices depends on the choice complexity. For example, it is easier to make a choice between two alternatives with few attributes than between five alternatives with several attributes. In the latter case it will be much harder to choose the preferred alternative, which is reflected in a higher response error. Several authors have dealt with this choice complexity in the estimation stage, but very little attention has been paid to setting up designs that take this complexity into account. The core issue of this paper is to find out whether it is worthwhile to take this complexity into account in the design stage. We construct efficient semi-Bayesian D-optimal designs for the heteroscedastic conditional logit model, which is used to model the across-respondent variability that occurs due to the choice complexity. The degree of complexity is measured by the entropy, as suggested by Swait and Adamowicz (2001). The proposed designs are compared with a semi-Bayesian D-optimal design constructed without taking the complexity into account. The simulation study shows that it is much better to take the choice complexity into account when constructing conjoint choice experiments.
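A sketch of the complexity measure itself: the entropy of the multinomial logit choice probabilities for a choice set, which is higher when alternatives are close in utility and the choice is therefore harder. The utilities below are illustrative.

```python
import numpy as np

def choice_entropy(utilities):
    """Entropy -sum p_j log p_j of MNL probabilities p_j = exp(v_j) / sum_k exp(v_k)."""
    v = np.asarray(utilities, dtype=float)
    p = np.exp(v - v.max())            # numerically stable softmax
    p /= p.sum()
    return -np.sum(p * np.log(p))

easy = [2.0, -1.0]                     # one clearly dominant alternative
hard = [0.1, 0.0, 0.05, 0.02, -0.1]    # five near-indifferent alternatives
print(f"easy choice set entropy: {choice_entropy(easy):.3f}")
print(f"hard choice set entropy: {choice_entropy(hard):.3f}")
```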