Search results: 2,064 records (search time: 15 ms).
Access: paid full text, 2,047; free, 12; free within China, 5.
By discipline: Management, 188; Ethnology, 16; Demography, 52; Collected works, 15; Theory and methodology, 67; General, 151; Sociology, 353; Statistics, 1,222.
By year: 2024, 5; 2023, 45; 2022, 15; 2021, 29; 2020, 57; 2019, 76; 2018, 163; 2017, 249; 2016, 105; 2015, 91; 2014, 84; 2013, 585; 2012, 192; 2011, 47; 2010, 34; 2009, 29; 2008, 32; 2007, 30; 2006, 25; 2005, 32; 2004, 19; 2003, 7; 2002, 14; 2001, 10; 2000, 3; 1999, 9; 1998, 4; 1997, 4; 1996, 6; 1995, 2; 1994, 3; 1993, 1; 1992, 2; 1990, 2; 1988, 1; 1987, 1; 1985, 11; 1984, 10; 1983, 5; 1982, 4; 1981, 11; 1980, 7; 1979, 1; 1978, 2.
71.
Optimality of experimental designs for spatially correlated observations is investigated. Some two-dimensional correlation structures are discussed, and an attempt is made to find an optimal or nearly optimal design for each situation. The solutions lead to designs similar to those used for repeated measurements. The relative efficiency of the proposed designs, in comparison with randomized Latin square designs, is tabulated for some cases.
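To make the efficiency comparison concrete, here is a minimal sketch (not taken from the paper) that computes the average GLS variance of treatment-effect estimates for two treatment layouts on a one-dimensional strip of plots with AR(1)-type spatial correlation; the layouts, the correlation value, and the criterion are illustrative assumptions.

```python
# Minimal sketch (not from the paper): efficiency comparison of two treatment
# layouts on a 1-D strip of plots with AR(1)-type spatial correlation.
# The layouts, the correlation value rho, and the criterion (average GLS
# variance of the treatment-effect estimates) are illustrative assumptions.
import numpy as np

def avg_treatment_variance(layout, rho):
    """Average GLS variance of treatment effects for a given plot ordering."""
    n, t = len(layout), max(layout) + 1
    X = np.zeros((n, t))              # intercept + indicators (treatment 0 = baseline)
    X[:, 0] = 1.0
    for i, trt in enumerate(layout):
        if trt > 0:
            X[i, trt] = 1.0
    idx = np.arange(n)
    Sigma = rho ** np.abs(idx[:, None] - idx[None, :])   # AR(1) spatial covariance
    cov = np.linalg.inv(X.T @ np.linalg.solve(Sigma, X)) # GLS covariance matrix
    return np.mean(np.diag(cov)[1:])

rho = 0.5
interleaved = [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2]   # repeated-measurements-style layout
clustered   = [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2]   # treatments grouped together
print("variance ratio (clustered / interleaved):",
      avg_treatment_variance(clustered, rho) / avg_treatment_variance(interleaved, rho))
```

A ratio above one would indicate that the interleaved layout estimates treatment contrasts more precisely under positive spatial correlation, which is the sense in which such designs resemble those used for repeated measurements.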
72.
In recent years, with the availability of high-frequency financial market data, modeling realized volatility has become a new and innovative research direction. The construction of "observable" or realized volatility series from intra-day transaction data and the use of standard time-series techniques have led to promising strategies for modeling and predicting (daily) volatility. In this article, we show that the residuals of commonly used time-series models for realized volatility and logarithmic realized variance exhibit non-Gaussianity and volatility clustering. We propose extensions to explicitly account for these properties and assess their relevance for modeling and forecasting realized volatility. In an empirical application to S&P 500 index futures, we show that allowing for time-varying volatility of realized volatility and logarithmic realized variance substantially improves the fit as well as the predictive performance. Furthermore, the distributional assumption for the residuals plays a crucial role in density forecasting.
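As a rough illustration of the diagnostic that motivates such extensions, the sketch below fits a simple AR(1) to a simulated stand-in for a log realized variance series and checks the squared residuals for autocorrelation, the usual sign of volatility clustering; the simulated series and the lag order are assumptions, not the paper's specification.

```python
# Minimal sketch of the diagnostic idea: fit an AR(1) to a log realized
# variance series and inspect the squared residuals for autocorrelation,
# i.e., volatility clustering. The simulated stand-in series and the lag
# order are illustrative assumptions, not the paper's specification.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a log realized variance series with clustered innovation
# volatility (replace with a real series constructed from intra-day data).
n = 2000
log_rv = np.zeros(n)
sig2, eps = 0.1, 0.0
for t in range(1, n):
    sig2 = 0.02 + 0.15 * eps**2 + 0.80 * sig2       # GARCH(1,1)-style variance
    eps = np.sqrt(sig2) * rng.standard_normal()
    log_rv[t] = 0.7 * log_rv[t - 1] + eps

# OLS fit of log_rv[t] = c + phi * log_rv[t-1] + e[t]
y, x = log_rv[1:], log_rv[:-1]
X = np.column_stack([np.ones_like(x), x])
c, phi = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - (c + phi * x)

def acf(z, lag):
    z = z - z.mean()
    return np.dot(z[:-lag], z[lag:]) / np.dot(z, z)

# Clear positive autocorrelation in resid**2 signals volatility clustering,
# motivating a GARCH-type specification for the residual variance.
print("ACF of squared residuals, lags 1-5:",
      [round(acf(resid**2, k), 3) for k in range(1, 6)])
```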
73.
ABSTRACT

Area statistics are sample versions of areas occurring in a probability plot of two distribution functions F and G. This paper presents a unified basis for five statistics of this type. They can be used for various testing problems in the framework of the two-sample problem for independent observations, such as testing equality of distributions against inequality, or testing stochastic dominance of distributions in one or either direction against non-dominance. Although three of the statistics considered have already been suggested in the literature, two of them are new and deserve attention. The finite-sample distributions of the statistics (under F = G) can be calculated via recursion formulae. Two tables with critical values of the new statistics are included. The asymptotic distributions of the properly normalized area statistics are functionals of the Brownian bridge; their distribution functions and quantiles are obtained by Monte Carlo simulation. Finally, the power functions of the two new tests based on area statistics are compared to those of the tests based on the corresponding supremum statistics, i.e., statistics of the Kolmogorov–Smirnov type.
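For intuition, the sketch below computes one simple area-type statistic of my own construction (not necessarily one of the paper's five): the signed and absolute areas between the two empirical distribution functions, integrated over the pooled sample.

```python
# Minimal sketch (my own illustration, not one of the paper's five statistics):
# "area-type" two-sample statistics obtained by integrating the difference of
# the two empirical distribution functions over the pooled sample.
import numpy as np

def ecdf(sample, points):
    sample = np.sort(sample)
    return np.searchsorted(sample, points, side="right") / len(sample)

def area_statistics(x, y):
    pooled = np.sort(np.concatenate([x, y]))
    Fx, Gy = ecdf(x, pooled), ecdf(y, pooled)
    widths = np.diff(pooled)
    diff = (Fx - Gy)[:-1]                      # step value on each interval
    signed = np.sum(diff * widths)             # sensitive to stochastic dominance
    absolute = np.sum(np.abs(diff) * widths)   # sensitive to any inequality
    return signed, absolute

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=100)
y = rng.normal(0.3, 1.0, size=120)             # shifted, so y is stochastically larger
print(area_statistics(x, y))
```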
74.
Abstract

In a quantitative linear model with errors following a stationary Gaussian first-order autoregressive, or AR(1), process, Generalized Least Squares (GLS) on the raw data and Ordinary Least Squares (OLS) on prewhitened data are efficient methods for estimating the slope parameters when the autocorrelation parameter of the AR(1) error process, ρ, is known. In practice, ρ is generally unknown. In so-called two-stage estimation procedures, ρ is estimated first, and the estimate is then used to transform the data before the slope parameters are estimated by OLS on the transformed data. Different estimators of ρ have been considered in previous studies. In this article, we study nine two-stage estimation procedures for their efficiency in estimating the slope parameters. Six of them (three noniterative, three iterative) are based on three estimators of ρ that have been considered previously. Two more (one noniterative, one iterative) are based on a new estimator of ρ that we propose: the sample autocorrelation coefficient of the OLS residuals at lag 1, denoted r(1). Lastly, Restricted Maximum Likelihood (REML) represents a different type of two-stage estimation procedure whose efficiency has not yet been compared with the others. We also study the validity of the testing procedures derived from GLS and from the nine two-stage estimation procedures. Efficiency and validity are analyzed in a Monte Carlo study. Three types of explanatory variable x in a simple quantitative linear model with AR(1) errors are considered in the time domain: Case 1, x is fixed; Case 2, x is purely random; and Case 3, x follows an AR(1) process with the same autocorrelation parameter value as the error process. In a preliminary step, the number of inadmissible estimates and the efficiency of the different estimators of ρ are compared empirically, whereas their approximate expected values in finite samples and their asymptotic variances are derived theoretically. Thereafter, the efficiency of the estimation procedures and the validity of the derived testing procedures are discussed in terms of the sample size and the magnitude and sign of ρ. The noniterative two-stage estimation procedure based on the new estimator of ρ is shown to be more efficient for moderate values of ρ at small sample sizes. With the exception of small sample sizes, REML and its derived F-test perform the best overall. The asymptotic equivalence of the two-stage estimation procedures, other than REML, is observed empirically. Differences related to the nature of the explanatory variable, fixed or random (uncorrelated or autocorrelated), are also discussed.
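A minimal sketch of a noniterative two-stage procedure of the kind studied here: estimate ρ by the lag-1 autocorrelation r(1) of the OLS residuals, apply a Prais–Winsten-type transformation, and re-run OLS. The simulated design and the transformation details are illustrative assumptions rather than the article's exact procedures.

```python
# Minimal sketch of a non-iterative two-stage procedure:
# (1) OLS on the raw data, (2) estimate rho by the lag-1 autocorrelation r(1)
# of the OLS residuals, (3) Prais-Winsten transform the data with rho_hat and
# re-run OLS. The simulated design and transform details are my assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, rho_true = 100, 0.6

# Simple linear model y = b0 + b1*x + e, with AR(1) errors.
x = rng.normal(size=n)
e = np.zeros(n)
e[0] = rng.normal() / np.sqrt(1 - rho_true**2)
for t in range(1, n):
    e[t] = rho_true * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: OLS and r(1) of the residuals.
X = np.column_stack([np.ones(n), x])
resid = y - X @ ols(X, y)
r1 = np.dot(resid[1:], resid[:-1]) / np.dot(resid, resid)

# Stage 2: Prais-Winsten transformation with rho_hat = r(1), then OLS.
scale = np.sqrt(1 - r1**2)
y_t = np.concatenate([[scale * y[0]], y[1:] - r1 * y[:-1]])
X_t = np.vstack([scale * X[0], X[1:] - r1 * X[:-1]])
print("rho_hat:", round(r1, 3), "coefficient estimates:", ols(X_t, y_t))
```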
75.
ABSTRACT

The search for optimal non-parametric estimates of the cumulative distribution and hazard functions under order constraints inspired at least two earlier classic papers in mathematical statistics: those of Kiefer and Wolfowitz (1976) and Grenander (1956), respectively. In both cases, either the greatest convex minorant or the least concave majorant played a fundamental role. Based on Kiefer and Wolfowitz's work, Wang (1986, 1987) found asymptotically minimax estimates of the distribution function F and its cumulative hazard function Λ in the class of all increasing failure rate (IFR) and all increasing failure rate average (IFRA) distributions. In this paper, we prove limit theorems that extend Wang's asymptotic results to the mixed censorship/truncation model, and we provide some other relevant results. The methods are illustrated on the Channing House data, originally analysed by Hyde (1977, 1980).
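The geometric object at the heart of these estimators can be computed directly. The sketch below finds the least concave majorant of an empirical distribution function with a simple upper-hull scan; it is illustrative only, and the paper's estimators for censored/truncated data are not reproduced.

```python
# Minimal sketch of the geometric object behind these estimators: the least
# concave majorant (LCM) of an empirical distribution function, computed with
# a simple upper-hull scan. Purely illustrative.
import numpy as np

def least_concave_majorant(x, F):
    """Vertices (x_i, LCM(x_i)) of the LCM of the points (x_i, F_i), x increasing."""
    hull = []  # upper convex hull == graph of the least concave majorant
    for p in zip(x, F):
        # Pop the last vertex while it lies on or below the new chord.
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (y2 - y1) * (p[0] - x1) <= (p[1] - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(p)
    return np.array(hull)

rng = np.random.default_rng(3)
data = np.sort(rng.exponential(size=50))       # decreasing density, so the CDF is concave
grid = np.concatenate([[0.0], data])
ecdf = np.arange(len(grid)) / len(data)        # ECDF evaluated at the jump points
print(least_concave_majorant(grid, ecdf)[:5])
```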
76.
Abstract

The efficacy and the asymptotic relative efficiency (ARE) of a weighted sum of Kendall's taus, a weighted sum of Spearman's rhos, a weighted sum of Pearson's r's, and a weighted sum of z-transformed Fisher–Yates correlation coefficients, in the presence of a blocking variable, are discussed. A method for selecting the weighting constants that maximize the efficacy of these four correlation coefficients is proposed. Estimates, test statistics, and confidence intervals for the four weighted correlation coefficients are also developed. To compare the small-sample properties of the four tests, a simulation study is performed. Both the theoretical and the simulated results favor the weighted sum of Pearson correlation coefficients with the optimal weights, as well as the weighted sum of z-transformed Fisher–Yates correlation coefficients with the optimal weights.
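A minimal sketch of the basic construction: Pearson's r is computed within each block and the block values are combined with weights. Weighting each block by n_b - 3, as is natural for Fisher z-transformed correlations, is an illustrative choice, not the optimal weighting derived in the paper.

```python
# Minimal sketch of combining per-block correlations: Pearson's r within each
# block, averaged with weights. The (n_b - 3) weights are an illustrative
# choice; the paper derives weights that maximize the efficacy.
import numpy as np

def pearson_r(x, y):
    x, y = x - x.mean(), y - y.mean()
    return np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y))

def weighted_block_correlation(blocks):
    """blocks: list of (x, y) arrays, one pair per level of the blocking variable."""
    r = np.array([pearson_r(x, y) for x, y in blocks])
    w = np.array([len(x) - 3 for x, _ in blocks], dtype=float)
    w /= w.sum()
    return np.sum(w * r)

rng = np.random.default_rng(4)
blocks = []
for n_b in (20, 35, 50):                      # three blocks of different sizes
    x = rng.normal(size=n_b)
    y = 0.5 * x + rng.normal(scale=np.sqrt(1 - 0.25), size=n_b)
    blocks.append((x, y))
print("weighted correlation estimate:", round(weighted_block_correlation(blocks), 3))
```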
77.
Stochastic Models, 2013, 29(1): 61–92
We study the sojourn times of customers in a processor-sharing queue with a service rate that varies over time, depending on the number of customers and on the state of a random environment. An explicit expression is derived for the Laplace–Stieltjes transform of the sojourn time conditional on the state upon arrival and the amount of work brought into the system. Particular attention is paid to the conditional mean sojourn time of a customer as a function of the required amount of work, and we establish the existence of an asymptote as the amount of work tends to infinity. The method of random time change is then extended to include the possibility of a varying service rate. By means of this method, we explain the well-established proportionality between the conditional mean sojourn time and the required amount of work in processor-sharing queues without a random environment. Based on numerical experiments, we propose an approximation for the conditional mean sojourn time. Although first presented for exponentially distributed service requirements, the analysis is shown to extend to phase-type services. The discriminatory processor-sharing discipline is also shown to fall within the framework.
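As a simplified illustration of the proportionality result mentioned above, the sketch below simulates an M/M/1 processor-sharing queue with a constant unit service rate (i.e., without the random environment) and compares the conditional mean sojourn time per unit of required work with the known value 1/(1 - ρ); all parameter values are my own choices.

```python
# Minimal sketch (a simplification of the model above): simulate an M/M/1
# processor-sharing queue with a CONSTANT unit service rate (no random
# environment) and check the proportionality E[T | work = x] = x / (1 - rho).
import numpy as np

rng = np.random.default_rng(5)
lam, mu, horizon = 0.7, 1.0, 50_000.0        # arrival rate, service rate, sim length

t, next_arrival = 0.0, rng.exponential(1 / lam)
jobs = []                                    # [remaining work, arrival time, total work]
done = []                                    # (total work, sojourn time)
while t < horizon:
    n = len(jobs)
    next_departure = min(j[0] for j in jobs) * n if jobs else np.inf
    is_arrival = (next_arrival - t) <= next_departure
    dt = (next_arrival - t) if is_arrival else next_departure
    for j in jobs:                           # each job is served at rate 1/n
        j[0] -= dt / n
    t += dt
    if is_arrival:
        work = rng.exponential(1 / mu)
        jobs.append([work, t, work])
        next_arrival = t + rng.exponential(1 / lam)
    else:                                    # the job with least remaining work leaves
        k = min(range(len(jobs)), key=lambda i: jobs[i][0])
        job = jobs.pop(k)
        done.append((job[2], t - job[1]))

done = np.array(done)
print("theoretical slope 1/(1 - rho):", round(1 / (1 - lam / mu), 2))
for lo, hi in [(0.5, 1.0), (1.5, 2.0), (2.5, 3.0)]:
    sel = (done[:, 0] >= lo) & (done[:, 0] < hi)
    print(f"work in [{lo}, {hi}): mean sojourn / work =",
          round(float(np.mean(done[sel, 1] / done[sel, 0])), 2))
```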
78.
Stochastic Models, 2013, 29(2): 193–227
The Double Chain Markov Model is a fully Markovian model for the representation of time series in random environments. In this article, we show that it can handle high-order transitions both between observations and between hidden states. In order to reduce the number of parameters, each transition matrix can be replaced by a Mixture Transition Distribution model. We provide a complete derivation of the algorithms needed to compute the model. Three applications (the analysis of a DNA sequence, the song of the wood pewee, and the behavior of young monkeys) show that this model is of great interest for representing data that can be decomposed into a finite set of patterns.
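A minimal sketch of the Mixture Transition Distribution (MTD) idea used to reduce the number of parameters: the probability of the next symbol given the previous symbols is a convex combination of a single first-order transition matrix applied at each lag. The example sequence, the lag weights, and the matrix are illustrative assumptions.

```python
# Minimal sketch of the Mixture Transition Distribution (MTD) idea: the
# probability of the next symbol given l previous symbols is a convex
# combination of ONE first-order transition matrix applied at each lag.
# The sequence, lag weights, and matrix are illustrative assumptions.
import numpy as np

def mtd_log_likelihood(seq, Q, lam):
    """Log-likelihood of an integer-coded sequence under an MTD(l) model."""
    l = len(lam)
    ll = 0.0
    for t in range(l, len(seq)):
        p = sum(lam[g] * Q[seq[t - 1 - g], seq[t]] for g in range(l))
        ll += np.log(p)
    return ll

rng = np.random.default_rng(6)
seq = rng.integers(0, 3, size=500)               # stand-in for DNA/behaviour codes

# First-order transition matrix estimated from counts (rows sum to one).
Q = np.ones((3, 3))                              # add-one smoothing
for a, b in zip(seq[:-1], seq[1:]):
    Q[a, b] += 1
Q /= Q.sum(axis=1, keepdims=True)

lam = np.array([0.7, 0.2, 0.1])                  # weights for lags 1, 2, 3
print("MTD(3) log-likelihood:", round(mtd_log_likelihood(seq, Q, lam), 2))
```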
79.
Stochastic Models, 2013, 29(2): 173–191
Abstract

We propose a new approximation formula for the waiting time tail probability of the M/G/1 queue with FIFO discipline and unlimited waiting space. The aim is to address the difficulty of obtaining good estimates when the tail probability has non-exponential asymptotics. We show that the waiting time tail probability can be expressed in terms of the waiting time tail probability of a notional M/G/1 queue with truncated service time distribution plus the tail probability of an extreme order statistic. The Cramér–Lundberg approximation is applied to approximate the tail probability of the notional queue. In essence, our technique extends the applicability of the Cramér–Lundberg approximation to cases where the standard Lundberg condition does not hold. We propose a simple moment-based technique for estimating the parameters of the approximation; numerical results demonstrate that our approximation can yield very good estimates over the whole range of the argument.
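For reference, here is a sketch of the classical Cramér–Lundberg approximation that the proposed technique builds on: P(W > x) ≈ C exp(-γx), where γ solves the Lundberg equation λ(M_S(γ) - 1) = γ for an M/G/1 queue with arrival rate λ and service-time moment generating function M_S. Exponential service is used below only because the approximation is then exact and easy to check; the parameter values are mine.

```python
# Minimal sketch of the classical Cramer-Lundberg approximation:
# P(W > x) ~ C * exp(-gamma * x), with gamma the root of the Lundberg
# equation lam * (M_S(gamma) - 1) = gamma. Exponential service is used so
# that the result can be compared against the exact M/M/1 waiting-time tail.
import numpy as np
from scipy.optimize import brentq

lam, mu = 0.7, 1.0                               # rho = lam / mu = 0.7

def mgf(s):                                      # MGF of Exp(mu) service, s < mu
    return mu / (mu - s)

def mgf_prime(s):
    return mu / (mu - s) ** 2

# Adjustment coefficient: root of lam * (M_S(s) - 1) - s on (0, mu).
gamma = brentq(lambda s: lam * (mgf(s) - 1.0) - s, 1e-9, mu - 1e-9)
C = (1.0 - lam / mu) / (lam * mgf_prime(gamma) - 1.0)

for x in (1.0, 3.0, 5.0):
    approx = C * np.exp(-gamma * x)
    exact = (lam / mu) * np.exp(-(mu - lam) * x)  # known M/M/1 waiting-time tail
    print(f"x={x}:  CL approximation {approx:.4f}   exact {exact:.4f}")
```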
80.
This article presents a new Qual VAR model for incorporating information from qualitative and/or discrete variables in vector autoregressions. With a Qual VAR, it is possible to create dynamic forecasts of the qualitative variable using standard VAR projections. Previous forecasting methods for qualitative variables, in contrast, produce only static forecasts. I apply the Qual VAR to forecasting the 2001 business recession out of sample and to analyzing the Romer and Romer narrative measure of monetary policy contractions as an endogenous variable in a VAR. Out of sample, the model predicts the timing of the 2001 recession quite well relative to the recession probabilities put forth at the time by professional forecasters. Qual VARs, which include information about the qualitative variable, can also enhance the quality of density forecasts of the other variables in the system.
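A very rough sketch of the Qual VAR idea (not the paper's Bayesian estimation): the binary indicator is assumed to equal one exactly when a latent continuous variable is negative, the latent variable enters a VAR with the observed variables, and dynamic probabilities for the indicator are obtained from simulated forecast paths. The latent series is simulated and treated as observed here, so the whole exercise is illustrative only.

```python
# Very rough sketch of the Qual VAR idea (NOT the paper's Gibbs sampler):
# a latent continuous variable drives a binary indicator (indicator = 1 when
# the latent variable is negative), the latent variable sits in a VAR with the
# observed variables, and dynamic indicator probabilities come from simulated
# forecast paths. The latent series is simulated, so this is illustrative only.
import numpy as np

rng = np.random.default_rng(7)

# Simulate a 2-variable VAR(1): y[:, 0] is the latent variable, y[:, 1] observed.
A = np.array([[0.8, -0.2], [0.1, 0.6]])
n = 400
y = np.zeros((n, 2))
for t in range(1, n):
    y[t] = A @ y[t - 1] + rng.normal(scale=0.5, size=2)

# OLS estimation of the VAR(1) coefficients (latent treated as known here).
Y, X = y[1:], y[:-1]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
resid = Y - X @ A_hat.T
Sigma = np.cov(resid.T)

# Dynamic probabilities P(latent < 0) at horizons 1..8 via simulated paths.
h, n_sim = 8, 5000
paths = np.empty((n_sim, h))
for s in range(n_sim):
    state = y[-1].copy()
    for j in range(h):
        state = A_hat @ state + rng.multivariate_normal(np.zeros(2), Sigma)
        paths[s, j] = state[0]
print("P(indicator = 1) at horizons 1-8:", np.round((paths < 0).mean(axis=0), 2))
```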