Full-text access type
Paid full text | 1,599 articles |
Free | 26 articles |
Free (domestic) | 3 articles |
Subject classification
Management | 103 articles |
Demography | 5 articles |
Collected works and series | 2 articles |
Theory and methodology | 1 article |
General | 122 articles |
Sociology | 3 articles |
Statistics | 1,392 articles |
Publication year
2023 | 2 articles |
2022 | 4 articles |
2021 | 3 articles |
2020 | 18 articles |
2019 | 35 articles |
2018 | 56 articles |
2017 | 109 articles |
2016 | 24 articles |
2015 | 29 articles |
2014 | 54 articles |
2013 | 562 articles |
2012 | 109 articles |
2011 | 36 articles |
2010 | 40 articles |
2009 | 55 articles |
2008 | 45 articles |
2007 | 39 articles |
2006 | 22 articles |
2005 | 35 articles |
2004 | 31 articles |
2003 | 21 articles |
2002 | 34 articles |
2001 | 31 articles |
2000 | 16 articles |
1999 | 30 articles |
1998 | 24 articles |
1997 | 25 articles |
1996 | 11 articles |
1995 | 10 articles |
1994 | 13 articles |
1993 | 12 articles |
1992 | 18 articles |
1991 | 10 articles |
1990 | 8 articles |
1989 | 4 articles |
1988 | 4 articles |
1987 | 6 articles |
1986 | 1 article |
1985 | 4 articles |
1984 | 10 articles |
1983 | 8 articles |
1982 | 5 articles |
1981 | 8 articles |
1980 | 4 articles |
1979 | 1 article |
1978 | 2 articles |
Sort order: 1,628 query results in total (search time: 296 ms)
41.
Taking a random variable as the system's stochastic parameter, this paper studies the Hopf bifurcation of chaotic systems containing random parameters. Chebyshev orthogonal polynomial approximation theory is used to transform the system containing the random variable into an equivalent deterministic system, and the Hopf bifurcation and stability of the random-parameter system are then analyzed via the Hopf bifurcation theorem and Lyapunov coefficients. It is found that the size of the asymptotic stability parameter interval of the stochastic system depends not only on the deterministic parameters but also very closely on the random parameter.
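The Chebyshev step described above, replacing a quantity that depends on a bounded random parameter by a truncated series of Chebyshev polynomials so that a stochastic equation becomes a finite set of deterministic coefficient equations, can be illustrated in miniature. This is a sketch, not the article's chaotic system; the response function `g` is hypothetical. Coefficients are computed by Gauss-Chebyshev quadrature and the reconstruction is checked at a test point:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_expand(g, order):
    """Chebyshev coefficients of g on [-1, 1] via Gauss-Chebyshev quadrature."""
    n = order + 1
    k = np.arange(n)
    # Gauss-Chebyshev nodes: zeros of T_n
    u = np.cos((2 * k + 1) * np.pi / (2 * n))
    coeffs = np.zeros(n)
    gu = g(u)
    for j in range(n):
        # c_j = (2/n) * sum g(u_k) T_j(u_k), with c_0 halved
        coeffs[j] = (2 - (j == 0)) / n * np.sum(gu * np.cos(j * (2 * k + 1) * np.pi / (2 * n)))
    return coeffs

# Hypothetical nonlinear dependence on a random parameter u in [-1, 1]
g = lambda u: np.exp(0.5 * u)
c = cheb_expand(g, 8)
approx = C.chebval(0.3, c)   # truncated series evaluated at u = 0.3
```

Because the coefficients of a smooth function decay rapidly, an order-8 truncation already reproduces `g` essentially to machine precision.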
42.
We investigate empirical likelihood for the additive hazards model with current status data. An empirical log-likelihood ratio for a vector or subvector of regression parameters is defined and its limiting distribution is shown to be a standard chi-squared distribution. The proposed inference procedure enables us to make empirical likelihood-based inference for the regression parameters. Finite sample performance of the proposed method is assessed in simulation studies and compared with that of a normal approximation method; the results show that the empirical likelihood method provides more accurate inference than the normal approximation method. A real data example is used for illustration.
43.
Maria Iannario, Communications in Statistics - Simulation and Computation, 2016, 45(5): 1621-1635
A practical problem with large-scale survey data is the possible presence of overdispersion. It occurs when the data display more variability than is predicted by the variance–mean relationship. This article describes a probability distribution generated by a mixture of discrete random variables to capture uncertainty, feeling, and overdispersion. Specifically, several tests for detecting overdispersion will be implemented on the basis of the asymptotic theory for maximum likelihood estimators. We discuss the results of a simulation experiment concerning log-likelihood ratio, Wald, score, and profile tests. Finally, some real datasets are analyzed to illustrate the previous results.
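As a hedged illustration of an overdispersion test (a simple score test under an intercept-only Poisson null, not necessarily one of the tests implemented in the article): under the Poisson assumption the statistic below is asymptotically standard normal, and overdispersed data push it far into the right tail.

```python
import numpy as np

def overdispersion_score(y):
    """Score test for overdispersion against an intercept-only Poisson null.

    Under Poisson, Var((y - mu)^2 - y) = 2 mu^2, so
    T = sum((y - ybar)^2 - y) / (ybar * sqrt(2 n)) is asymptotically N(0, 1).
    """
    y = np.asarray(y, dtype=float)
    n, ybar = y.size, y.mean()
    return np.sum((y - ybar) ** 2 - y) / (ybar * np.sqrt(2.0 * n))

rng = np.random.default_rng(0)
# Negative binomial with mean 5 and variance 17.5: clearly overdispersed
y_nb = rng.negative_binomial(2, 1 / 3.5, size=500)
t = overdispersion_score(y_nb)   # far above the N(0, 1) range
```

For genuinely Poisson data the statistic hovers around zero; here the excess variance drives it to a very large positive value.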
44.
Data envelopment analysis (DEA) and free disposal hull (FDH) estimators are widely used to estimate efficiency of production. Practitioners use DEA estimators far more frequently than FDH estimators, implicitly assuming that production sets are convex. Moreover, use of the constant returns to scale (CRS) version of the DEA estimator requires an assumption of CRS. Although bootstrap methods have been developed for making inference about the efficiencies of individual units, until now no methods exist for making consistent inference about differences in mean efficiency across groups of producers or for testing hypotheses about model structure such as returns to scale or convexity of the production set. We use central limit theorem results from our previous work to develop additional theoretical results permitting consistent tests of model structure and provide Monte Carlo evidence on the performance of the tests in terms of size and power. In addition, the variable returns to scale version of the DEA estimator is proved to attain the faster convergence rate of the CRS-DEA estimator under CRS. Using a sample of U.S. commercial banks, we test and reject convexity of the production set, calling into question results from numerous banking studies that have imposed convexity assumptions. Supplementary materials for this article are available online.
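A minimal sketch of the FDH estimator mentioned above (input-oriented, free disposability only, no convexity assumption), with hypothetical single-input, single-output data:

```python
import numpy as np

def fdh_input_efficiency(X, Y, x0, y0):
    """Input-oriented FDH efficiency of the unit (x0, y0).

    Among observed units that produce at least y0 in every output, find the
    smallest proportional scaling of x0 that remains attainable:
    theta = min over dominating units i of max_j (x_ij / x0_j).
    Scores are <= 1; a score of 1 means no observed unit does better.
    """
    X, Y = np.atleast_2d(X), np.atleast_2d(Y)
    dominates = np.all(Y >= y0, axis=1)
    ratios = np.max(X[dominates] / x0, axis=1)
    return float(np.min(ratios))

# Hypothetical data: three units, one input, one output
X = np.array([[2.0], [4.0], [6.0]])
Y = np.array([[1.0], [2.0], [3.0]])
score = fdh_input_efficiency(X, Y, np.array([6.0]), np.array([2.0]))
# Unit (6, 2) is dominated: unit (4, 2) makes the same output with 4/6 of the input
```

A DEA estimator would instead envelop the data with a convex hull; comparing the two frontiers is exactly the kind of structural question the article's tests address.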
45.
Frédéric Ferraty, Econometric Reviews, 2016, 35(2): 263-292
We estimate two well-known risk measures, the value-at-risk (VaR) and the expected shortfall, conditionally on a functional variable (i.e., a random variable valued in some semi(pseudo)-metric space). We use nonparametric kernel estimation to construct estimators of these quantities under general dependence conditions. Theoretical properties are stated, while practical aspects are illustrated on simulated data: nonlinear functional and GARCH(1,1) models. Some ideas on bandwidth selection using the bootstrap are introduced. Finally, an empirical example is given using data from the S&P 500 time series.
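As a simpler unconditional analogue of the paper's conditional estimators, the empirical VaR and expected shortfall of a return sample can be computed directly (losses are the negatives of returns here; the 5% level is illustrative):

```python
import numpy as np

def var_es(returns, alpha=0.05):
    """Empirical value-at-risk and expected shortfall at level alpha.

    VaR is the (1 - alpha) quantile of the loss distribution; the expected
    shortfall is the mean loss beyond that quantile, so ES >= VaR always.
    """
    losses = -np.asarray(returns, dtype=float)
    var = np.quantile(losses, 1.0 - alpha)
    es = losses[losses >= var].mean()
    return var, es

rng = np.random.default_rng(0)
r = rng.standard_normal(100_000)   # simulated standard normal returns
var, es = var_es(r)
# For N(0, 1) the theoretical values are about 1.645 (VaR) and 2.063 (ES)
```

The article's estimators replace these raw empirical quantities with kernel-weighted versions conditional on a functional covariate, which is what makes bandwidth selection an issue.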
46.
This article considers a nonparametric additive seemingly unrelated regression model with autoregressive errors, and develops estimation and inference procedures for this model. Our proposed method first estimates the unknown functions by combining polynomial spline series approximations with least squares, and then uses the fitted residuals together with the smoothly clipped absolute deviation (SCAD) penalty to identify the error structure and estimate the unknown autoregressive coefficients. Based on the polynomial spline series estimator and the fitted error structure, a two-stage local polynomial improved estimator for the unknown functions of the mean is further developed. Our procedure applies a prewhitening transformation of the dependent variable, and also takes into account the contemporaneous correlations across equations. We show that the resulting estimator possesses an oracle property, and is asymptotically more efficient than estimators that neglect the autocorrelation and/or contemporaneous correlations of errors. We investigate the small sample properties of the proposed procedure in a simulation study.
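The SCAD penalty used in the error-structure selection step has a simple closed form: linear near zero (like the lasso), quadratically tapering, then constant, which is what yields the oracle property. A sketch with the conventional tuning value a = 3.7 recommended by Fan and Li:

```python
def scad(t, lam, a=3.7):
    """SCAD penalty of Fan and Li evaluated at coefficient magnitude |t|.

    Piecewise: lam*|t| for |t| <= lam, a quadratic taper on (lam, a*lam],
    and the constant (a + 1)*lam^2 / 2 beyond a*lam, so large coefficients
    incur no additional shrinkage.
    """
    t = abs(t)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return (2 * a * lam * t - t * t - lam * lam) / (2 * (a - 1))
    return (a + 1) * lam * lam / 2
```

The three pieces join continuously: at |t| = λ the first two branches both equal λ², and at |t| = aλ the taper reaches its constant ceiling.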
47.
48.
The authors show how saddlepoint techniques lead to highly accurate approximations for Bayesian predictive densities and cumulative distribution functions in stochastic model settings where the prior is tractable, but not necessarily the likelihood or the predictand distribution. They consider more specifically models involving predictions associated with waiting times for semi‐Markov processes whose distributions are indexed by an unknown parameter θ. Bayesian prediction for such processes when they are not stationary is also addressed, and the inverse‐Gaussian based saddlepoint approximation of Wood, Booth & Butler (1993) is shown to accurately deal with the nonstationarity whereas the normal‐based Lugannani & Rice (1980) approximation cannot. Their methods are illustrated by predicting various waiting times associated with M/M/q and M/G/1 queues. They also discuss modifications to the matrix renewal theory needed for computing the moment generating functions that are used in the saddlepoint methods.
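The Lugannani & Rice (1980) tail approximation named above can be written down in a few lines. As a hedged illustration, using a Gamma mean rather than the article's semi-Markov waiting times, the code compares the saddlepoint tail for the mean of ten unit exponentials with the exact value from the Poisson-sum identity:

```python
import math

def lugannani_rice_exp_mean(n, x):
    """Lugannani-Rice approximation of P(Xbar > x), X_i ~ Exp(1).

    The CGF of the mean is K(t) = -n*log(1 - t/n); the saddlepoint solves
    K'(t) = x, and the tail is 1 - Phi(w) + phi(w)*(1/u - 1/w).
    """
    t_hat = n * (1.0 - 1.0 / x)                # saddlepoint
    K = -n * math.log(1.0 - t_hat / n)
    w = math.copysign(math.sqrt(2.0 * (t_hat * x - K)), t_hat)
    u = t_hat * math.sqrt(x * x / n)           # t_hat * sqrt(K''(t_hat))
    phi = math.exp(-0.5 * w * w) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(w / math.sqrt(2.0)))
    return 1.0 - Phi + phi * (1.0 / u - 1.0 / w)

def exact_exp_mean_tail(n, x):
    """Exact P(Gamma(n, 1) > n*x) for integer n via the Poisson-sum identity."""
    y = n * x
    term, total = 1.0, 0.0
    for k in range(n):
        total += term
        term *= y / (k + 1)
    return math.exp(-y) * total

approx = lugannani_rice_exp_mean(10, 1.5)
exact = exact_exp_mean_tail(10, 1.5)
```

Even with only n = 10, the relative error of the saddlepoint tail here is a small fraction of a percent, which is the accuracy the article exploits.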
49.
M. Jamshidian & R. I. Jennrich, Journal of the Royal Statistical Society, Series B (Statistical Methodology), 2000, 62(2): 257-270
The EM algorithm is a popular method for computing maximum likelihood estimates. One of its drawbacks is that it does not produce standard errors as a by-product. We consider obtaining standard errors by numerical differentiation. Two approaches are considered. The first differentiates the Fisher score vector to yield the Hessian of the log-likelihood. The second differentiates the EM operator and uses an identity that relates its derivative to the Hessian of the log-likelihood. The well-known SEM algorithm uses the second approach. We consider three additional algorithms: one that uses the first approach and two that use the second. We evaluate the complexity and precision of these three algorithms and the SEM algorithm in seven examples. The first is a single-parameter example used to give insight. The others are three examples in each of two areas of EM application: Poisson mixture models and the estimation of covariance from incomplete data. The examples show that there are algorithms that are much simpler and more accurate than the SEM algorithm. Hopefully their simplicity will increase the availability of standard error estimates in EM applications. It is shown that, as previously conjectured, a symmetry diagnostic can accurately estimate errors arising from numerical differentiation. Some issues related to the speed of the EM algorithm and algorithms that differentiate the EM operator are identified.
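The first approach, numerical differentiation of the log-likelihood to recover the observed information, can be sketched in a one-parameter model where the answer is known analytically (this is an illustration of the idea, not the authors' algorithms):

```python
import numpy as np

def numeric_se(loglik, theta_hat, h=1e-4):
    """Standard error from a central-difference second derivative of a
    one-dimensional log-likelihood, evaluated at the MLE."""
    d2 = (loglik(theta_hat + h) - 2.0 * loglik(theta_hat)
          + loglik(theta_hat - h)) / (h * h)
    return 1.0 / np.sqrt(-d2)           # inverse observed information

rng = np.random.default_rng(1)
y = rng.poisson(3.0, size=200)
lam_hat = y.mean()                      # Poisson MLE
ll = lambda lam: np.sum(y * np.log(lam) - lam)
se_num = numeric_se(ll, lam_hat)
se_analytic = np.sqrt(lam_hat / y.size) # known closed form for comparison
```

The two values agree to several digits; the article's symmetry diagnostic is a way of estimating exactly this kind of differentiation error when no closed form is available.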
50.
The main objective of this work is to evaluate the performance of confidence intervals, built using the deviance statistic, for the hyperparameters of state space models. The first procedure is a marginal approximation to confidence regions, based on the likelihood test, and the second one is based on the signed root deviance profile. Those methods are computationally efficient and are not affected by problems such as intervals with limits outside the parameter space, which can be the case when the focus is on the variances of the errors. The procedures are compared to the usual approaches in the literature, including the method based on the asymptotic distribution of the maximum likelihood estimator as well as bootstrap confidence intervals. The comparison is performed via a Monte Carlo study, in order to establish empirically the advantages and disadvantages of each method. The results show that the methods based on the deviance statistic possess a better coverage rate than the asymptotic and bootstrap procedures.
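A one-parameter analogue (not a state space model) makes the deviance-interval construction concrete: the interval is the set of parameter values whose deviance from the maximum stays below the χ²₁ critical value, with endpoints located by bisection. Because only points with finite log-likelihood are ever visited, the limits cannot leave the parameter space, which is the property highlighted above.

```python
import numpy as np

def deviance_ci(loglik, theta_hat, lo, hi, crit=3.841):
    """Deviance-based CI: endpoints solve 2*(l(theta_hat) - l(theta)) = crit.

    lo and hi must lie outside the interval (deviance > crit there);
    bisection then homes in on each crossing point.
    """
    lmax = loglik(theta_hat)
    def endpoint(outside):
        a, b = outside, theta_hat       # deviance > crit at a, zero at b
        for _ in range(100):
            m = 0.5 * (a + b)
            if 2.0 * (lmax - loglik(m)) > crit:
                a = m
            else:
                b = m
        return 0.5 * (a + b)
    return endpoint(lo), endpoint(hi)

rng = np.random.default_rng(3)
y = rng.poisson(4.0, size=50)
lam_hat = y.mean()
ll = lambda lam: np.sum(y * np.log(lam)) - y.size * lam
left, right = deviance_ci(ll, lam_hat, 1.0, 9.0)
```

The resulting interval is close to the Wald interval in width but asymmetric around the MLE, reflecting the skewness of the Poisson likelihood.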