Search results: 3,019 matches (items 31–40 shown below).
31.
32.
Two families of processes, pure-jump processes and jump-diffusion processes, are widely used in the literature. Recent empirical findings show that in many situations the processes underlying high-frequency data sets are pure-jump processes of infinite variation, and statistical tests have been proposed to put these findings on a theoretical footing. In this paper, we extend the work of Jing et al. (2012) in two respects: (1) the jump processes in the null and alternative hypotheses may differ; and (2) the null hypothesis covers more flexible processes, which are more relevant in finance when considering models for asset prices or nominal interest rates. Theoretically, the test is shown to be very powerful and to keep the type I error probability under the nominal level.
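As a rough illustration of the two model families (not of the proposed test itself), the following Python sketch simulates a high-frequency path from each: a jump-diffusion (Brownian motion plus compound Poisson jumps) and a pure-jump process of infinite variation (a symmetric alpha-stable Lévy process with 1 < alpha < 2). All parameter values are arbitrary choices made for the example.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)
n, dt = 10_000, 1.0 / 10_000            # high-frequency grid on [0, 1]

# Jump-diffusion: Brownian increments plus compound Poisson jumps
# (on this fine grid, at most one jump per step effectively occurs).
diffusion = 0.2 * np.sqrt(dt) * rng.standard_normal(n)
jumps = rng.poisson(5 * dt, size=n) * rng.normal(0.0, 0.05, size=n)
jump_diffusion = np.cumsum(diffusion + jumps)

# Pure-jump process of infinite variation: increments of a symmetric
# alpha-stable Levy process with 1 < alpha < 2 over time steps dt.
alpha = 1.5
stable_increments = levy_stable.rvs(alpha, 0.0, scale=dt ** (1 / alpha),
                                    size=n, random_state=rng)
pure_jump = np.cumsum(stable_increments)

print(jump_diffusion[-1], pure_jump[-1])
```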
33.
Troutt (1991, 1993) proposed the idea of the vertical density representation (VDR) based on the Box–Muller method. Kotz, Fang and Liang (1997) provided a systematic study of the multivariate vertical density representation (MVDR). Suppose that we want to generate a random vector X in R^n that has density function f(x). The key step in using the MVDR is to generate the uniform distribution on the surface D_f(v) = {x : f(x) = v} in R^n for any v > 0. In this paper we use the conditional distribution method to generate the uniform distribution on a domain or on a surface, and based on it we propose an alternative version of the MVDR (type 2 MVDR), by which the problem of generating a random vector X with a given density f is transferred to that of generating (X, X_{n+1}) uniformly distributed on a region in R^{n+1} defined by f. Several examples indicate that the proposed method is quite practical.
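The "transfer" step described above can be illustrated with a minimal Python sketch: if (X, X_{n+1}) is uniform on the region {(x, v) : 0 < v < f(x)} in R^{n+1}, then X has density f. The sketch obtains such a uniform draw by plain rejection sampling rather than by the conditional distribution method of the paper, and assumes f is bounded and supported (essentially) inside a known box.

```python
import numpy as np

def sample_under_density(f, bounds, f_max, size, rng=None):
    """Draw X with density f by sampling (X, V) uniformly on the region
    {(x, v): 0 < v < f(x)} in R^(n+1); the X-coordinate of a uniform draw
    on that region then has density f.  Plain rejection sampling is used,
    assuming f is supported inside the box `bounds` and bounded by `f_max`."""
    rng = rng or np.random.default_rng()
    lo, hi = np.asarray(bounds, dtype=float).T    # bounds = [(lo_1, hi_1), ...]
    out = []
    while len(out) < size:
        x = rng.uniform(lo, hi)
        v = rng.uniform(0.0, f_max)
        if v < f(x):                              # keep points under the graph of f
            out.append(x)
    return np.array(out)

# Example: a bivariate density proportional to exp(-|x1| - |x2|) on a box.
f = lambda x: 0.25 * np.exp(-np.abs(x).sum())
draws = sample_under_density(f, bounds=[(-8, 8), (-8, 8)], f_max=0.25, size=1000)
print(draws.mean(axis=0))                         # should be close to (0, 0)
```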
34.
Log-location-scale distributions are widely used parametric models of fundamental importance in both parametric and semiparametric frameworks. The likelihood equations based on a Type II censored sample from a location-scale distribution do not provide explicit solutions for the parameters. Widely available statistical software relies on iterative methods (such as the Newton–Raphson algorithm or the EM algorithm), which require starting values near the global maximum, and there are many situations that specialized software does not handle. This paper provides explicit estimators for the location and scale parameters by approximating the likelihood function; the method does not require any starting values. The performance of the proposed approximate method for the Weibull and log-logistic distributions is compared with that of iterative methods through simulation studies over a wide range of sample sizes and Type II censoring schemes. We also examine the probability coverages of pivotal quantities based on asymptotic normality. In addition, two examples are given.
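For contrast with the explicit approximate estimators described above (which are not reproduced here), this is a minimal Python sketch of the usual iterative maximum-likelihood fit from a Type II censored Weibull sample: the r smallest of n lifetimes contribute their densities and the censored remainder contributes the survival function at the largest observed value. Parameter values in the toy data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_type2_mle(x_obs, n):
    """Maximum-likelihood fit of Weibull(shape, scale) under Type II censoring:
    x_obs holds the r smallest of n lifetimes.  This is the usual iterative
    MLE (Nelder-Mead on the log scale), not the explicit approximate
    estimators proposed in the article; shown only for contrast."""
    x_obs = np.sort(np.asarray(x_obs, dtype=float))
    r, x_r = len(x_obs), x_obs[-1]

    def neg_loglik(theta):
        k, lam = np.exp(theta)                    # optimise on the log scale (> 0)
        z = x_obs / lam
        ll = np.sum(np.log(k) - np.log(lam) + (k - 1) * np.log(z) - z ** k)
        ll += (n - r) * (-(x_r / lam) ** k)       # censored observations: log S(x_(r))
        return -ll

    res = minimize(neg_loglik, x0=np.log([1.0, np.mean(x_obs)]),
                   method="Nelder-Mead")
    return np.exp(res.x)                          # (shape, scale)

rng = np.random.default_rng(7)
full = rng.weibull(2.0, size=100) * 3.0           # true shape 2, scale 3
print(weibull_type2_mle(np.sort(full)[:70], n=100))
```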
35.
Jin Zhang, Statistics, 2013, 47(4): 792–799
The Pareto distribution is an important distribution in statistics and has been widely used in finance, physics, hydrology, geology, astronomy, and other fields. Although parameter estimation for the Pareto distribution is well established in the literature, the estimation problem for the truncated Pareto distribution is more complex. This article investigates the bias and mean-squared error of maximum-likelihood estimation for the truncated Pareto distribution, and some useful results are obtained.
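A minimal Python sketch of maximum-likelihood estimation for the truncated Pareto distribution: the lower and upper bounds are estimated by the sample minimum and maximum, and the shape parameter is found by numerically maximising the log-likelihood. The bias and mean-squared-error analysis of the article is not reproduced, and the density parameterisation used below is an assumption stated in the code.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def truncated_pareto_mle(x):
    """MLE for the truncated Pareto distribution with assumed density
    f(x) = a * L**a * x**(-a-1) / (1 - (L/U)**a) on [L, U].
    L and U are estimated by the sample minimum and maximum, the shape a
    by numerical maximisation; no bias correction is applied."""
    x = np.asarray(x, dtype=float)
    L, U, n = x.min(), x.max(), len(x)

    def neg_loglik(a):
        if a <= 0:
            return np.inf
        return -(n * np.log(a) + n * a * np.log(L)
                 - (a + 1) * np.log(x).sum()
                 - n * np.log1p(-(L / U) ** a))

    res = minimize_scalar(neg_loglik, bounds=(1e-6, 50.0), method="bounded")
    return res.x, L, U                            # (shape, lower, upper)

# simulate a truncated Pareto(a = 1.5) sample on [1, 10] by inverse CDF
rng = np.random.default_rng(3)
u = rng.uniform(size=2000)
a_true, L0, U0 = 1.5, 1.0, 10.0
x = (L0 ** -a_true - u * (L0 ** -a_true - U0 ** -a_true)) ** (-1.0 / a_true)
print(truncated_pareto_mle(x))
```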
36.
The good performance of logit confidence intervals for the odds ratio with small samples is well known, unless the actual odds ratio is very large. In single capture–recapture estimation the odds ratio equals 1 because the samples are assumed independent. Consequently, a transformation of the logit confidence interval for the odds ratio is proposed for estimating the size of a closed population under single capture–recapture sampling. The transformed logit interval, computed after adding 0.5 to each observed count, has actual coverage probabilities close to the nominal level even for small populations and for capture probabilities near 0 or 1, which is not guaranteed for the other capture–recapture confidence intervals proposed in the statistical literature. Since the 0.5-adjusted transformed logit interval is very simple to compute and performs well, it is well suited for implementation by most users of the single capture–recapture method.
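A rough Python sketch of the idea: with cells a = m (captured in both samples), b = n1 − m, c = n2 − m and the unobserved cell d = N − n1 − n2 + m, independence forces the odds ratio to equal 1, and a logit-scale interval for the odds ratio (with 0.5 added to each observed cell) can be mapped back to an interval for N. This is only one plausible reading of the construction; the exact interval proposed in the article may differ.

```python
import numpy as np
from scipy.stats import norm

def transformed_logit_ci(n1, n2, m, level=0.95):
    """Population-size interval for single capture-recapture via a logit-scale
    interval for the odds ratio, with 0.5 added to each observed cell.
    Rough sketch of the general idea only, not necessarily the article's
    exact interval."""
    z = norm.ppf(0.5 + level / 2.0)
    a, b, c = m + 0.5, (n1 - m) + 0.5, (n2 - m) + 0.5   # observed cells + 0.5
    d_hat = b * c / a                   # unobserved cell implied by odds ratio = 1
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d_hat)     # SE of the log odds ratio
    psi_lo, psi_hi = np.exp(-z * se), np.exp(z * se)    # interval for psi around 1
    base = n1 + n2 - m
    n_hat = base + d_hat                # Lincoln-Petersen-type point estimate
    return n_hat, (base + psi_lo * d_hat, base + psi_hi * d_hat)

print(transformed_logit_ci(n1=200, n2=150, m=30))
```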
37.
Determining the influence of a traffic accident on the road helps in analyzing the characteristics of traffic flow and in taking reasonable and effective control measures. Here, the detrended fluctuation analysis method is applied to investigate the complexity of time series in mixed traffic flow with a blockage induced by an accident. The scaling exponent is analyzed as a parameter that describes the long-term evolutionary behavior of the traffic-flow time series. According to the scaling exponent, the traffic-flow time series can display long-range correlation, short-range correlation, or non-power-law behavior within the long-range correlated regime, depending strongly on the vehicle entry probability, the proportion of slow vehicles, and the duration of the blockage.
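A minimal Python implementation of first-order detrended fluctuation analysis in its standard formulation: integrate the mean-removed series, detrend it within windows of increasing length, and read the scaling exponent off the slope of log F(s) against log s. The window sizes and polynomial order are illustrative choices, not those of the article.

```python
import numpy as np

def dfa_exponent(series, scales=None, order=1):
    """Detrended fluctuation analysis: integrate the mean-removed series,
    remove a polynomial trend of the given order within windows of length s,
    and regress log F(s) on log s.  A slope near 0.5 indicates no long-range
    correlation; slopes in (0.5, 1] indicate long-range correlation."""
    x = np.asarray(series, dtype=float)
    profile = np.cumsum(x - x.mean())
    if scales is None:
        scales = np.unique(np.logspace(np.log10(8), np.log10(len(x) // 4),
                                       20).astype(int))
    fluct = []
    for s in scales:
        n_win = len(profile) // s
        segs = profile[:n_win * s].reshape(n_win, s)
        t = np.arange(s)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, order)          # local polynomial trend
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(rms)))
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return slope

rng = np.random.default_rng(11)
print(dfa_exponent(rng.standard_normal(4000)))        # about 0.5 for white noise
```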
38.
This article develops a new cumulative sum (CUSUM) statistic to identify aberrant behavior in a sequentially administered multiple-choice standardized examination. The examination responses can be described as finite Poisson trials, and the statistic can be used for other applications that fit this framework. The standardized examination setting uses a maximum-likelihood estimate of examinee ability and an item response theory model. Aberrant and non-aberrant probabilities are compared through an odds ratio, analogous to risk-adjusted CUSUM schemes. The significance level of a hypothesis test, where the null hypothesis is non-aberrant examinee behavior, is computed with Markov chains, and a smoothing process is used to spread probabilities across the Markov states. The practicality of the approach for detecting aberrant examinee behavior is demonstrated with results from both simulated and empirical data.
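A generic Python sketch of a CUSUM over dichotomous item responses in the spirit described above: each response adds a log odds-ratio weight comparing an aberrant success probability with the probability implied by normal behaviour, and the statistic resets at zero. The probabilities here are made up for the example; the article's IRT-based probabilities, smoothing, and Markov-chain significance computation are not reproduced.

```python
import numpy as np

def aberrance_cusum(responses, p_null, p_aberrant):
    """CUSUM over dichotomous item responses (finite Poisson trials).
    p_null[i] is the probability of a correct answer under normal behaviour
    (e.g. from an IRT model at the estimated ability), p_aberrant[i] under
    the aberrant alternative.  Each step adds a log odds-ratio style weight,
    analogous to risk-adjusted CUSUM schemes; generic sketch only."""
    resp = np.asarray(responses)
    p0, p1 = np.asarray(p_null), np.asarray(p_aberrant)
    w = np.where(resp == 1, np.log(p1 / p0), np.log((1 - p1) / (1 - p0)))
    c, path = 0.0, []
    for wi in w:
        c = max(0.0, c + wi)                      # reset at zero; flag large values
        path.append(c)
    return np.array(path)

# toy example: 20 items, aberrant behaviour raises the success probability
rng = np.random.default_rng(5)
p0 = rng.uniform(0.3, 0.8, size=20)
p1 = np.minimum(p0 + 0.15, 0.95)
resp = rng.binomial(1, p1)                        # simulate an aberrant examinee
print(aberrance_cusum(resp, p0, p1).round(2))
```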
39.
Inference in generalized linear mixed models with crossed random effects is often made cumbersome by the high-dimensional intractable integrals involved in the marginal likelihood. This article presents two inferential approaches based on the marginal composite likelihood for the normal Bradley–Terry model. The two approaches are illustrated, and their performance evaluated, by a simulation study; the asymptotic variances of the estimated variance component are then compared.
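For orientation only, here is a Python sketch of the ordinary fixed-effects Bradley–Terry model fitted by direct maximum likelihood; the crossed-random-effects version and its marginal composite likelihood discussed in the article require integrating over the random effects and are not shown. The win-count matrix is invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

def bradley_terry_mle(wins):
    """Fit plain Bradley-Terry abilities from a matrix of pairwise win counts
    (wins[i, j] = number of times i beat j), using
    P(i beats j) = exp(a_i) / (exp(a_i) + exp(a_j)).
    Fixed-effects model only; no random effects or composite likelihood."""
    wins = np.asarray(wins, dtype=float)
    k = wins.shape[0]

    def neg_loglik(a):
        a = np.append(a, 0.0)                     # last ability fixed at 0 (identifiability)
        diff = a[:, None] - a[None, :]
        return -(wins * (diff - np.logaddexp(0.0, diff))).sum()

    res = minimize(neg_loglik, x0=np.zeros(k - 1), method="BFGS")
    return np.append(res.x, 0.0)

wins = np.array([[0, 6, 8],
                 [4, 0, 7],
                 [2, 3, 0]])
print(bradley_terry_mle(wins).round(3))
```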
40.
In this article, it is shown that in panel data models the Hausman test (HT) statistic can be considerably refined using the bootstrap technique. An Edgeworth expansion shows that the coverage of the bootstrapped HT is second-order correct.

The asymptotic and bootstrapped HT are also compared by Monte Carlo simulations. Under the null hypothesis and at a nominal size of 0.05, the bootstrapped HT reduces the coverage error of the asymptotic HT by 10–40% of the nominal size; for nominal sizes of at most 0.025, the coverage error reduction is between 30% and 80% of the nominal size. Under non-null alternatives, the power of the asymptotic HT is spuriously inflated by over 70% of the correct power for nominal sizes of at most 0.025; the bootstrapped HT reduces this overrejection to less than one quarter of its value. The advantages of the bootstrapped HT increase with the number of explanatory variables.

Heteroscedasticity or serial correlation in the idiosyncratic part of the error does not diminish the advantages of the bootstrapped HT, provided a heteroscedasticity-robust version of the HT and the wild bootstrap are used. However, the power penalty is not negligible if a heteroscedasticity-robust approach is used in a homoscedastic panel data model.
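A self-contained Python (numpy) sketch of a Hausman test with an entity-level (pairs) bootstrap, assuming a balanced panel with centred regressors and crude Swamy–Arora-style variance components; the bootstrap statistic is recentred at the sample FE–RE discrepancy so that it mimics the null. This is one generic way to bootstrap the HT statistic and is not necessarily the scheme studied in the article.

```python
import numpy as np

def fe_re_estimates(y, X, ids):
    """Within (FE) and quasi-demeaned GLS (RE) slope estimates with covariance
    matrices on a balanced panel; crude variance components, no intercept
    (y and the regressors are assumed centred)."""
    n, k = X.shape
    groups = np.unique(ids)
    N, T = len(groups), n // len(groups)
    idx = np.searchsorted(groups, ids)
    gmX = np.vstack([X[ids == g].mean(axis=0) for g in groups])
    gmy = np.array([y[ids == g].mean() for g in groups])
    # fixed effects: demean within entities
    Xw, yw = X - gmX[idx], y - gmy[idx]
    b_fe = np.linalg.lstsq(Xw, yw, rcond=None)[0]
    s2_e = np.sum((yw - Xw @ b_fe) ** 2) / (n - N - k)
    V_fe = s2_e * np.linalg.inv(Xw.T @ Xw)
    # random effects: quasi-demeaning with theta from between/within variances
    b_b = np.linalg.lstsq(gmX, gmy, rcond=None)[0]
    s2_b = np.sum((gmy - gmX @ b_b) ** 2) / max(N - k, 1)
    s2_u = max(s2_b - s2_e / T, 0.0)
    theta = 1.0 - np.sqrt(s2_e / (T * s2_u + s2_e))
    Xr, yr = X - theta * gmX[idx], y - theta * gmy[idx]
    b_re = np.linalg.lstsq(Xr, yr, rcond=None)[0]
    V_re = (np.sum((yr - Xr @ b_re) ** 2) / (n - k)) * np.linalg.inv(Xr.T @ Xr)
    return b_fe, V_fe, b_re, V_re

def hausman(b_fe, V_fe, b_re, V_re, center=0.0):
    """Hausman statistic; `center` recentres the FE-RE discrepancy (used by
    the bootstrap so the resampled statistic mimics the null)."""
    d = (b_fe - b_re) - center
    return float(d @ np.linalg.solve(V_fe - V_re, d))

# toy balanced panel in which the random-effects assumption holds
rng = np.random.default_rng(0)
N, T = 50, 6
ids = np.repeat(np.arange(N), T)
X = rng.standard_normal((N * T, 2))
u = np.repeat(rng.standard_normal(N), T)          # entity effect, independent of X
y = X @ np.array([1.0, -0.5]) + u + rng.standard_normal(N * T)

b_fe, V_fe, b_re, V_re = fe_re_estimates(y, X, ids)
ht_obs = hausman(b_fe, V_fe, b_re, V_re)

# entity-level (pairs) bootstrap of the recentred HT statistic
boot = []
for _ in range(499):
    pick = rng.choice(N, size=N, replace=True)     # resample whole entities
    rows = np.concatenate([np.where(ids == g)[0] for g in pick])
    bf, Vf, br, Vr = fe_re_estimates(y[rows], X[rows], np.repeat(np.arange(N), T))
    boot.append(hausman(bf, Vf, br, Vr, center=b_fe - b_re))
p_boot = np.mean(np.array(boot) >= ht_obs)
print(round(ht_obs, 3), p_boot)
```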