731.
Risks are usually represented and measured by volatility–covolatility matrices. Wishart processes are models for the dynamic analysis of multivariate risk: they describe the evolution of stochastic volatility–covolatility matrices constrained to be symmetric positive definite. The autoregressive Wishart (WAR) process is the multivariate extension of the Cox–Ingersoll–Ross (CIR) process introduced for scalar stochastic volatility. Like the CIR process, it admits closed-form solutions for a number of financial problems, such as the term structure of T-bonds and corporate bonds, derivative pricing in a multivariate stochastic volatility model, and the structural model for credit risk. Moreover, Wishart dynamics are very flexible and are serious competitors to the less structural multivariate ARCH models.
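The scalar CIR building block that the WAR process generalises can be sketched with a simple Euler scheme. This is a minimal illustration only; the parameter values are invented, not taken from the paper, and the full-truncation trick is one of several standard ways to keep the discretised variance path non-negative.

```python
import numpy as np

def simulate_cir(kappa, theta, sigma, v0, T=1.0, n=1000, seed=0):
    """Euler discretisation of dV = kappa*(theta - V) dt + sigma*sqrt(V) dW,
    using full truncation (negative values are floored inside the diffusion
    term) so the square root stays well defined."""
    rng = np.random.default_rng(seed)
    dt = T / n
    v = np.empty(n + 1)
    v[0] = v0
    for i in range(n):
        vp = max(v[i], 0.0)  # full-truncation scheme
        v[i + 1] = (v[i]
                    + kappa * (theta - vp) * dt
                    + sigma * np.sqrt(vp * dt) * rng.standard_normal())
    return v

# illustrative parameters: mean-reverting variance around theta = 0.04
path = simulate_cir(kappa=2.0, theta=0.04, sigma=0.2, v0=0.04)
```

The Wishart process replaces the scalar `v` by a symmetric positive-definite matrix, with the same mean-reverting square-root structure.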
732.
Marginal changes of interacted variables and interaction terms in random-parameters ordered response models are calculated incorrectly in econometric software packages. We derive the correct formulas for these marginal changes. In our empirical example, we observe substantial changes not only in the magnitude of the marginal effects but also in their standard errors, suggesting that the incorrect estimation of these marginal effects, as is commonly practiced, can lead to biased inference.
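The core point, that the marginal effect of an interacted variable depends on the other variable through the interaction and cannot be read off a single coefficient, can be illustrated in a toy ordered probit. All coefficients and the single cutpoint below are hypothetical; this is not the paper's model or derivation.

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_top(x1, x2, b1=0.5, b2=0.3, b12=0.2, cut=1.0):
    """P(y = highest category) in a toy ordered probit with an x1*x2
    interaction. Coefficients and cutpoint are hypothetical."""
    xb = b1 * x1 + b2 * x2 + b12 * x1 * x2
    return 1.0 - Phi(cut - xb)

h = 1e-6

def me_x1(x1, x2):
    """Marginal effect of x1 by central finite difference on the predicted
    probability; because of the interaction it varies with x2."""
    return (p_top(x1 + h, x2) - p_top(x1 - h, x2)) / (2 * h)
```

Analytically this marginal effect is `(b1 + b12*x2) * phi(cut - xb)`, so reporting a single number for "the" effect of `x1` without fixing `x2` is already ambiguous, even before the random-parameters complications the paper addresses.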
733.
A fast and accurate method of confidence interval construction for the smoothing parameter in penalised spline and partially linear models is proposed. The method is akin to a parametric percentile bootstrap where Monte Carlo simulation is replaced by saddlepoint approximation, and can therefore be viewed as an approximate bootstrap. It is applicable in a quite general setting, requiring only that the underlying estimator be the root of an estimating equation that is a quadratic form in normal random variables. This is the case under a variety of optimality criteria such as maximum likelihood (ML), restricted ML (REML), generalised cross-validation (GCV) and Akaike's information criterion (AIC). Simulation studies reveal that under the ML and REML criteria, the method delivers near-exact performance with computational speeds that are an order of magnitude faster than existing exact methods, and two orders of magnitude faster than a classical bootstrap. Perhaps most importantly, the proposed method also offers a computationally feasible alternative when no exact or asymptotic methods are known, e.g. for GCV and AIC. The methodology is illustrated with an application to the well-known fossil data, where giving a range of plausible smoothing values helps answer questions about the statistical significance of apparent features in the data.
734.
In this paper a test statistic is proposed that is a modification of the W statistic for testing goodness of fit for the two-parameter extreme-value (smallest element) distribution. The test statistic is obtained as the ratio of two linear estimates of the scale parameter. It is shown that the suggested statistic is computationally simple and has good power properties. Percentage points of the statistic are obtained by Monte Carlo experiments. An example is given to illustrate the test procedure.
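The shape of such a procedure, a ratio of two scale estimates with null percentage points obtained by Monte Carlo, can be sketched for the standard smallest-extreme-value distribution. The two estimators below (a moment-based one and an IQR-based L-estimate) are illustrative stand-ins, not the paper's linear estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

def scale_ratio(x):
    """Ratio of two scale estimates for the smallest-extreme-value law.
    Its sd is (pi/sqrt(6)) * scale and its IQR is
    log(log(4)/log(4/3)) * scale, so both quotients estimate the scale."""
    s_mom = x.std(ddof=1) * np.sqrt(6) / np.pi
    iqr = np.quantile(x, 0.75) - np.quantile(x, 0.25)
    s_iqr = iqr / np.log(np.log(4.0) / np.log(4.0 / 3.0))
    return s_mom / s_iqr

# Null percentage points by Monte Carlo: if U ~ Uniform(0,1) then
# log(-log(U)) follows the standard smallest-extreme-value distribution.
n, B = 50, 2000
stats = np.array([scale_ratio(np.log(-np.log(rng.random(n)))) for _ in range(B)])
lo, hi = np.quantile(stats, [0.025, 0.975])
```

Under the null both estimates target the same scale, so the ratio concentrates around 1; values outside `(lo, hi)` would flag departure from the extreme-value shape.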
735.
In this article the problem of the optimal selection and allocation of time points in repeated-measures experiments is considered. D-optimal designs for linear regression models with a random intercept and first-order autoregressive serial correlations are computed numerically and compared with designs having equally spaced time points. When the order of the polynomial is known and the serial correlations are not too small, the comparison shows that, for any fixed number of repeated measures, a design with equally spaced time points is almost as efficient as the D-optimal design. When, however, there is no prior knowledge about the order of the underlying polynomial, the best choice in terms of efficiency is a D-optimal design for the highest possible relevant order of the polynomial; a design with equally spaced time points is the second-best choice.
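The D-optimality comparison can be illustrated by evaluating the log-determinant of the information matrix for a quadratic model under two allocations of seven time points on [-1, 1]. This sketch assumes i.i.d. errors and so ignores the random intercept and AR(1) structure that the paper handles; the designs are illustrative.

```python
import numpy as np

def d_crit(times, degree):
    """log-determinant of the information matrix X'X for a polynomial
    regression in time (i.i.d. errors assumed for this sketch)."""
    X = np.vander(times, degree + 1, increasing=True)
    return np.linalg.slogdet(X.T @ X)[1]

eq = np.linspace(-1, 1, 7)                               # equally spaced
clustered = np.array([-1, -1, 0, 0, 0, 1, 1], float)     # endpoints + centre
```

For a quadratic, mass at the endpoints and centre is known to be D-optimal on [-1, 1], and `d_crit(clustered, 2)` indeed exceeds `d_crit(eq, 2)`; the paper's finding is that with serial correlation this efficiency gap largely disappears.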
736.
Interpreting data and communicating effectively through graphs and tables are requisite skills for statisticians and non-statisticians in the pharmaceutical industry. However, the quality of visual displays of data in the medical and pharmaceutical literature and at scientific conferences is severely lacking. We describe an interactive, workshop-driven, 2-day short course that we constructed for pharmaceutical research personnel to learn these skills. The examples in the course and the workshop datasets are drawn from our professional experience, the scientific literature, and the mass media. During the course, participants gain hands-on experience with the principles of visual and graphical perception and with the design and construction of both graphic and tabular displays of quantitative and qualitative information. After completing the course, participants are able to construct, revise, critique, and interpret graphic and tabular displays with a critical eye, according to an extensive set of guidelines. Copyright © 2013 John Wiley & Sons, Ltd.
737.
We use a two-state Markov regime-switching model to explain the behaviour of WTI crude-oil spot prices from January 1986 to February 2012, investigating methods based on both the composite likelihood and the full likelihood. We found that the composite-likelihood approach better captures the general structural changes in world oil prices. The two-state Markov regime-switching model based on the composite likelihood closely depicts the cycles of the two postulated states, fall and rise. These two states persist, on average, for 8 and 15 months respectively, which matches the observed cycles during the period. According to the fitted model, drops in oil prices are more volatile than rises. We believe this information can be useful for financial officers working in related areas. The model based on the full-likelihood approach was less satisfactory. We attribute its failure to the fact that the two-state Markov regime-switching model is too rigid and overly simplistic; in comparison, the composite likelihood requires only that the model correctly specify the joint distribution of two adjacent price changes, so model violations in other areas do not invalidate the results. The Canadian Journal of Statistics 41: 353–367; 2013 © 2013 Statistical Society of Canada
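The fitted two-state dynamics can be mimicked by simulation: stay-probabilities `1 - 1/8` and `1 - 1/15` give mean state durations of roughly 8 and 15 months, matching the persistence the abstract reports. The means and volatilities below are invented for illustration, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(42)

# Transition matrix: state 0 = "fall" (mean duration 8 months),
# state 1 = "rise" (mean duration 15 months).
P = np.array([[1 - 1/8, 1/8],
              [1/15, 1 - 1/15]])
mu = np.array([-0.02, 0.01])      # illustrative monthly mean returns
sigma = np.array([0.10, 0.04])    # falls more volatile than rises

T = 5000
s = np.empty(T, dtype=int)
s[0] = 0
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])   # simulate the hidden regime chain
r = rng.normal(mu[s], sigma[s])           # regime-dependent price changes
```

The composite likelihood the paper favours would be built from the bivariate densities of adjacent pairs `(r[t], r[t+1])` rather than the full joint density of the whole path.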
738.
We consider N independent stochastic processes (X_i(t), t ∈ [0, T_i]), i = 1, …, N, defined by a stochastic differential equation whose drift term depends on a random variable φ_i. The distribution of the random effect φ_i depends on unknown parameters, which are to be estimated from continuous observation of the processes X_i. We give the expression of the exact likelihood. When the drift term depends linearly on the random effect φ_i and φ_i has a Gaussian distribution, an explicit formula for the likelihood is obtained. We prove that the maximum likelihood estimator is consistent and asymptotically Gaussian when T_i = T for all i and N tends to infinity. We discuss the case of discrete observations. Estimators are computed on simulated data for several models and show good performance even when the observation interval is not very long.
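The simplest instance of such a model is dX_i(t) = φ_i dt + σ dW_i(t) with φ_i Gaussian. A discretised simulation shows how the per-path drift estimate (X_i(T) − X_i(0))/T, averaged across paths, recovers the mean of the random effect; parameter values are illustrative and this is only a sketch, not the paper's exact-likelihood estimator.

```python
import numpy as np

rng = np.random.default_rng(7)
N, T, n = 200, 10.0, 500
dt = T / n
mu, omega, sigma = 1.0, 0.5, 0.3      # illustrative: phi_i ~ N(mu, omega^2)

phi = rng.normal(mu, omega, size=N)                 # one random drift per path
dW = rng.normal(0.0, np.sqrt(dt), size=(N, n))      # Brownian increments
X_T = (phi[:, None] * dt + sigma * dW).sum(axis=1)  # X_i(T), with X_i(0) = 0

# Per-path drift estimate; its variance omega^2 + sigma^2/T shrinks toward
# omega^2 as T grows, and averaging over N paths estimates mu.
phi_hat = X_T / T
```

The asymptotics in the abstract (T_i = T fixed, N → ∞) match this setup: each `phi_hat[i]` is a noisy draw around `phi[i]`, and consistency comes from the growing number of paths.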
739.
We propose several new tests for monotonicity of regression functions based on different empirical processes of residuals and pseudo-residuals. The residuals are obtained from an unconstrained kernel regression estimator, whereas the pseudo-residuals are obtained from an increasing regression estimator. In particular, we consider a recently developed simple kernel-based estimator for increasing regression functions based on increasing rearrangements of unconstrained non-parametric estimators. The test statistics are estimated distance measures between the regression function and its increasing rearrangement. We discuss the asymptotic distributions, consistency and small-sample performance of the tests.
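The increasing-rearrangement idea is easy to sketch: fit an unconstrained kernel estimator on a grid, sort its fitted values to obtain the monotone estimate, and use a distance between the two as a statistic. This is a simplified stand-in for the paper's tests (which work with residual processes), with invented data.

```python
import numpy as np

rng = np.random.default_rng(3)

def nw_fit(x, y, grid, h=0.15):
    """Nadaraya-Watson kernel regression evaluated on a grid (Gaussian kernel)."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

x = rng.random(300)
grid = np.linspace(0.05, 0.95, 50)

# Non-monotone truth: the fit differs visibly from its rearrangement.
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, size=300)
fhat = nw_fit(x, y, grid)
fmono = np.sort(fhat)                 # increasing rearrangement on the grid
dist = np.mean((fhat - fmono) ** 2)   # distance-type statistic

# Monotone truth: fit and rearrangement nearly coincide.
y2 = 2 * x + rng.normal(0, 0.2, size=300)
f2 = nw_fit(x, y2, grid)
dist2 = np.mean((f2 - np.sort(f2)) ** 2)
```

Sorting the fitted values is exactly the increasing rearrangement on a grid: it preserves the set of values while enforcing monotonicity, so the distance is zero whenever the unconstrained fit is already increasing.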
740.
In a missing-data setting, we have a sample in which a vector of explanatory variables ${\bf x}_i$ is observed for every subject i, while scalar responses $y_i$ are missing by happenstance for some individuals. In this work we propose robust estimators of the distribution of the responses under a semiparametric regression model, assuming the data are missing at random (MAR). Our approach allows consistent estimation of any weakly continuous functional of the response distribution. In particular, strongly consistent estimators of any continuous location functional, such as the median, L-functionals and M-functionals, are proposed. A robust fit for the regression model combined with the robust properties of the location functional gives rise to a robust recipe for estimating the location parameter. Robustness is quantified through the breakdown point of the proposed procedure. The asymptotic distribution of the location estimators is also derived. The proofs of the theorems are presented in Supplementary Material available online. The Canadian Journal of Statistics 41: 111–132; 2013 © 2012 Statistical Society of Canada
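Under MAR, a baseline (non-robust) way to estimate the response distribution and a location functional such as the median is inverse-probability weighting of the observed responses. The sketch below uses a missingness probability that is known because we specify it in the simulation; it is only a simplified analogue of the paper's semiparametric, robust procedure, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 5000
x = rng.normal(size=n)
y = 2.0 + x + rng.normal(size=n)           # full responses (median is 2)
pi = 1.0 / (1.0 + np.exp(-(0.5 + x)))      # MAR: missingness depends on x only
obs = rng.random(n) < pi                   # response indicator

# Inverse-probability-weighted empirical CDF of the responses.
w = obs / pi

def F_hat(t):
    return (w * (y <= t)).sum() / w.sum()

grid = np.linspace(-3, 7, 400)
Fvals = np.array([F_hat(t) for t in grid])
median_hat = grid[np.searchsorted(Fvals, 0.5)]   # IPW median estimate

median_naive = np.median(y[obs])   # complete-case median, biased under MAR
```

Because subjects with large `x` (and hence large `y`) are more likely to be observed, the complete-case median overshoots, while the weighted CDF corrects the selection; the paper's contribution is to make this kind of correction robust to outliers via a robust semiparametric regression fit.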