A total of 3,912 results matched the query; items 41–50 are listed below.
41.
Local linear curve estimators are typically constructed using a compactly supported kernel, which minimizes edge effects and (in the case of the Epanechnikov kernel) optimizes asymptotic performance in a mean square sense. The use of compactly supported kernels can produce numerical problems, however. A common remedy is ridging, which may be viewed as shrinkage of the local linear estimator towards the origin. In this paper we propose a general form of shrinkage, and suggest that, in practice, shrinkage be towards a proper curve estimator. For the latter we propose a local linear estimator based on an infinitely supported kernel. This approach is resistant to the selection of too large a shrinkage parameter, which can impair performance when shrinkage is towards the origin. It also removes the numerical instability that results from using a compactly supported kernel, and it enjoys very good mean squared error properties.
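A minimal sketch of the shrinkage idea (not the authors' implementation; the kernels, bandwidth, and shrinkage parameter `lam` are illustrative): a local linear fit with the compactly supported Epanechnikov kernel can become numerically degenerate when the local window contains too few design points, whereas a Gaussian-kernel fit is always defined, so shrinking the former towards the latter keeps the estimate stable.

```python
import numpy as np

def local_linear(x0, x, y, h, kernel):
    """Local linear estimate of E[Y | X = x0] with bandwidth h."""
    u = (x - x0) / h
    w = kernel(u)
    s0, s1, s2 = w.sum(), (w * u).sum(), (w * u**2).sum()
    t0, t1 = (w * y).sum(), (w * u * y).sum()
    denom = s0 * s2 - s1**2
    if denom <= 1e-12:                     # compact kernel: too little data in the window
        return np.nan
    return (s2 * t0 - s1 * t1) / denom

epanechnikov = lambda u: 0.75 * np.maximum(1 - u**2, 0.0)   # compact support
gaussian     = lambda u: np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)  # infinite support

def shrunken_estimate(x0, x, y, h, lam=0.1):
    """Shrink the Epanechnikov fit towards a Gaussian-kernel fit (illustrative weights)."""
    m_compact  = local_linear(x0, x, y, h, epanechnikov)
    m_infinite = local_linear(x0, x, y, h, gaussian)   # always well defined
    if np.isnan(m_compact):
        return m_infinite
    return (1 - lam) * m_compact + lam * m_infinite

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 60)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 60)
print(shrunken_estimate(0.05, x, y, h=0.08))
```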
42.
A Capital Structure Decision Model Based on Variance Analysis
Using statistical principles and methods of risk decision analysis, this paper proposes a new variance-analysis method and a matrix model for capital structure decisions. Compared with the existing probability-analysis approach, the proposed model avoids its vagueness and poor operability, effectively balances risk against return, and makes the capital structure decision model more operable and more widely applicable, providing a basis for a firm's choice of its optimal capital structure.
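The abstract does not spell out the model's formulas, so the following is only a hedged illustration of a mean–variance (variance-analysis) comparison of candidate capital structures: scenario-based EBIT gives expected earnings per share and its dispersion for each structure, and the coefficient of variation trades risk off against return. All figures, rates, and structure names are hypothetical.

```python
import numpy as np

# Hypothetical EBIT scenarios (probability, EBIT)
probs = np.array([0.3, 0.5, 0.2])
ebit  = np.array([800.0, 1200.0, 1600.0])

# Candidate capital structures: (debt, shares outstanding), hypothetical
structures = {"low debt":  (1000.0, 500.0),
              "mid debt":  (3000.0, 350.0),
              "high debt": (5000.0, 200.0)}
interest_rate, tax_rate = 0.08, 0.25

for name, (debt, shares) in structures.items():
    eps  = (ebit - interest_rate * debt) * (1 - tax_rate) / shares
    mean = probs @ eps
    std  = np.sqrt(probs @ (eps - mean) ** 2)
    cv   = std / mean                      # risk per unit of expected return
    print(f"{name}: E[EPS]={mean:.2f}, SD={std:.2f}, CV={cv:.3f}")
# A simple decision rule: choose the structure with the smallest CV,
# or the largest expected EPS subject to an acceptable CV.
```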
43.
We consider testing inference in inflated beta regressions subject to model misspecification. In particular, quasi-z tests based on sandwich covariance matrix estimators are described, and their finite-sample behavior is investigated via Monte Carlo simulations. The numerical evidence shows that quasi-z testing inference can be considerably more accurate than inference based on the usual z tests, especially when the model is misspecified. Interval estimation is also considered. We further present an empirical application that uses real (not simulated) data.
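The paper's setting is inflated beta regression; as a self-contained illustration of the quasi-z idea only, the sketch below contrasts z statistics built from a model-based covariance with quasi-z statistics built from an HC0 sandwich covariance in an ordinary linear regression with heteroskedastic errors.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), x])
# Misspecified variance: the errors are heteroskedastic
y = 1.0 + 2.0 * x + rng.normal(0, 0.5 + 1.5 * x, n)

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta

# Model-based covariance (assumes constant error variance)
sigma2 = resid @ resid / (n - X.shape[1])
cov_model = sigma2 * XtX_inv

# HC0 sandwich covariance: (X'X)^-1 [sum_i e_i^2 x_i x_i'] (X'X)^-1
meat = X.T @ (resid[:, None] ** 2 * X)
cov_sandwich = XtX_inv @ meat @ XtX_inv

for j, name in enumerate(["intercept", "slope"]):
    z_usual = beta[j] / np.sqrt(cov_model[j, j])
    z_quasi = beta[j] / np.sqrt(cov_sandwich[j, j])
    print(f"{name}: z = {z_usual:.2f}, quasi-z = {z_quasi:.2f}")
```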
44.
We develop four asymptotic interval estimators and one exact interval estimator for the odds ratio (OR) under stratified random sampling with matched pairs. We apply Monte Carlo simulation to evaluate the performance of these five interval estimators. We note that the conditional score test-based interval estimator with a monotonic transformation and the interval estimator based on the Mantel–Haenszel (MH) type point estimator with the logarithmic transformation are generally preferable to the others considered here. We also note that the conditional exact confidence interval is useful when the total number of matched pairs with discordant responses is small.
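As a hedged sketch of one ingredient only (a single stratum rather than the full stratified MH estimator of the paper): with matched pairs, the OR is estimated by the ratio of the two discordant-pair counts, and a Wald interval computed on the log scale is back-transformed to the OR scale. The counts are hypothetical.

```python
import math
from scipy.stats import norm

def matched_pairs_or_ci(b, c, alpha=0.05):
    """OR = b / c for discordant pairs (b: case exposed / control unexposed,
    c: case unexposed / control exposed), with a Wald CI on the log scale."""
    or_hat = b / c
    se_log = math.sqrt(1.0 / b + 1.0 / c)      # SE of log(OR) given the discordant pairs
    z = norm.ppf(1 - alpha / 2)
    lo = math.exp(math.log(or_hat) - z * se_log)
    hi = math.exp(math.log(or_hat) + z * se_log)
    return or_hat, (lo, hi)

print(matched_pairs_or_ci(b=25, c=10))   # hypothetical discordant-pair counts
```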
45.
We consider a class of dependent Bernoulli variables in which the conditional success probability is a linear combination of the outcomes of the last few trials and the original success probability. We obtain its limit theorems, including the strong law of large numbers, the weak invariance principle, and the law of the iterated logarithm. We also derive some statistical inference results that make the model applicable. Simulation results are presented as well, showing that even with small sample sizes the convergence rate is satisfactory and the proposed estimators behave well.
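A quick simulation sketch of such a model (parameter values hypothetical): the conditional success probability is a convex combination of the base probability p and the last k outcomes, and the sample mean still settles near p, consistent with the strong law of large numbers mentioned above.

```python
import numpy as np

def simulate_dependent_bernoulli(n, p, weights, rng):
    """P(X_t = 1 | past) = theta * p + sum_j weights[j] * X_{t-1-j},
    where theta = 1 - sum(weights), so the probability stays in [0, 1]."""
    weights = np.asarray(weights, dtype=float)
    theta = 1.0 - weights.sum()
    assert theta >= 0 and (weights >= 0).all()
    k = len(weights)
    x = np.empty(n, dtype=int)
    x[:k] = rng.random(k) < p                    # initial trials i.i.d. Bernoulli(p)
    for t in range(k, n):
        prob = theta * p + weights @ x[t - k:t][::-1]   # weights[0] on the latest trial
        x[t] = rng.random() < prob
    return x

rng = np.random.default_rng(2)
x = simulate_dependent_bernoulli(n=5000, p=0.3, weights=[0.2, 0.1], rng=rng)
print(x.mean())    # close to p = 0.3 even for moderate n
```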
46.
The problem of constructing control charts for fuzzy data has been considered in the literature. The transformation approaches and direct fuzzy approaches proposed so far each have advantages and disadvantages; representative-value charts based on transformation methods are often recommended in practice. When a fuzzy set is represented by a crisp value, the relative importance of members at different membership levels should be taken into account, and possibility theory can be employed for this purpose. In this article, we propose to employ the weighted possibilistic mean (WPM) and the weighted interval-valued possibilistic mean (WIVPM) of a fuzzy number as representative values for fuzzy attribute data, and we establish new fuzzy control charts based on WPM and WIVPM. The performance of the charts is compared with that of existing fuzzy charts in a fuzzy c-chart example, using a newly defined average number of inspections for variation of the control state.
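A small sketch of the representative-value step only (not the full control chart): the weighted possibilistic mean of a fuzzy number with γ-level cuts [a(γ), b(γ)] is ∫₀¹ w(γ)(a(γ)+b(γ))/2 dγ; with the common weight w(γ) = 2γ and a triangular number (l, m, u) this reduces to (l + 4m + u)/6. The weighting function and the fuzzy counts below are illustrative.

```python
import numpy as np

def wpm_triangular(l, m, u, weight=lambda g: 2.0 * g, n_grid=2001):
    """Weighted possibilistic mean of a triangular fuzzy number (l, m, u):
    the integral over g in [0, 1] of weight(g) * (a(g) + b(g)) / 2, where the
    g-level cut is [a(g), b(g)] = [l + g*(m - l), u - g*(u - m)]."""
    g = np.linspace(0.0, 1.0, n_grid)
    a = l + g * (m - l)
    b = u - g * (u - m)
    vals = weight(g) * (a + b) / 2.0
    dg = g[1] - g[0]
    return float(np.sum(vals[1:] + vals[:-1]) * dg / 2.0)     # trapezoidal rule

# Hypothetical fuzzy nonconformity counts, one triangular number per sample
fuzzy_counts = [(2, 4, 7), (1, 3, 5), (3, 5, 8), (0, 2, 4)]
print([round(wpm_triangular(*t), 3) for t in fuzzy_counts])   # crisp chart inputs
print(wpm_triangular(2, 4, 7), (2 + 4 * 4 + 7) / 6)           # matches the closed form
```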
47.
This paper considers the design of accelerated life test (ALT) sampling plans under Type I progressive interval censoring with random removals. We assume that the lifetime of products follows a Weibull distribution. Two constant stress levels, both higher than the use condition, are employed. The sample size and the acceptability constant that satisfy given levels of producer's risk and consumer's risk are found. In particular, the optimal stress level and the allocation proportion are obtained by minimizing the generalized asymptotic variance of the maximum likelihood estimators of the model parameters. Furthermore, for validation purposes, a Monte Carlo simulation is conducted to assess the true probability of acceptance for the derived sampling plans.
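A heavily simplified Monte Carlo sketch of the validation step (it ignores the accelerated stress levels and the progressive interval censoring of the paper): for a variables-type plan with sample size n and acceptability constant k, a lot is accepted when μ̂ − kσ̂ exceeds the log of the lower lifetime specification, where (μ, σ) are the location and scale of the log-lifetime of a Weibull distribution. All plan parameters are hypothetical.

```python
import numpy as np
from scipy.stats import weibull_min

def prob_acceptance(n, k, shape, scale, log_spec, n_rep=1000, seed=3):
    """Monte Carlo estimate of P(accept) for a plan that accepts the lot when
    mu_hat - k * sigma_hat >= log_spec, with mu = ln(scale) and sigma = 1/shape
    for a Weibull lifetime (i.e., extreme-value parameters of the log-lifetime)."""
    rng = np.random.default_rng(seed)
    accept = 0
    for _ in range(n_rep):
        t = weibull_min.rvs(shape, scale=scale, size=n, random_state=rng)
        c_hat, _, sc_hat = weibull_min.fit(t, floc=0)   # MLE with location fixed at 0
        mu_hat, sigma_hat = np.log(sc_hat), 1.0 / c_hat
        accept += (mu_hat - k * sigma_hat >= log_spec)
    return accept / n_rep

# Hypothetical plan and lot quality: n = 30 units, acceptability constant k = 1.8,
# lower lifetime specification t_L = 100 hours
print(prob_acceptance(n=30, k=1.8, shape=1.5, scale=1000.0, log_spec=np.log(100.0)))
```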
48.
Simulations of forest inventory in several populations compared simple random sampling with “quick probability proportional to size” (QPPS) sampling. The latter may be applied in the absence of a list sampling frame and/or prior measurement of the auxiliary variable. The correlation between the auxiliary and target variables required to render QPPS sampling more efficient than simple random sampling varied over the range 0.3–0.6 and was lower when sampling from populations that were skewed to the right. Two possible analytical estimators of the standard error of the estimated mean for QPPS sampling were found to be less reliable than bootstrapping.
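For a concrete baseline (standard probability-proportional-to-size sampling with replacement, not the specific QPPS field procedure), the sketch below contrasts the simple-random-sampling mean with a Hansen–Hurwitz PPS estimate on a simulated population and attaches a bootstrap standard error to the latter. The population and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
N, n = 2000, 50
aux = rng.lognormal(mean=1.0, sigma=0.6, size=N)          # auxiliary size variable
target = 5.0 * aux + rng.normal(0, 3.0, N)                # correlated target (e.g. volume)
true_mean = target.mean()

# Simple random sampling without replacement
srs_idx = rng.choice(N, n, replace=False)
srs_mean = target[srs_idx].mean()

# PPS with replacement (Hansen-Hurwitz): draw with probability proportional to aux
p = aux / aux.sum()
pps_idx = rng.choice(N, n, replace=True, p=p)
hh_mean = np.mean(target[pps_idx] / (N * p[pps_idx]))     # estimates the population mean

# Bootstrap SE of the PPS estimate: resample the n selected units with replacement
boot = [np.mean(target[idx] / (N * p[idx]))
        for idx in (rng.choice(pps_idx, n, replace=True) for _ in range(1000))]
print(f"true {true_mean:.2f}  SRS {srs_mean:.2f}  PPS {hh_mean:.2f}  "
      f"bootstrap SE {np.std(boot, ddof=1):.2f}")
```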
49.
In this article, we propose an efficient and robust estimator for the semiparametric mixture model that is a mixture of unknown location-shifted symmetric distributions. The estimator is obtained by minimizing the profile Hellinger distance (MPHD) between the model and a nonparametric density estimate, and we propose a simple and efficient algorithm to compute it. A Monte Carlo simulation study is conducted to examine the finite-sample performance of the proposed procedure and to compare it with other existing methods. Based on our empirical studies, the proposed procedure is very competitive with existing methods when the components are normal and performs much better when they are not. More importantly, the proposed procedure is robust when the data are contaminated with outlying observations. A real data application is also provided to illustrate the proposed estimation procedure.
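A stripped-down sketch of the minimum Hellinger distance idea, using a fully parametric two-component normal mixture rather than the semiparametric profile version of the paper: form a kernel density estimate, then minimize the Hellinger distance between it and the mixture density over the mixture parameters. Starting values and the parameterization are illustrative.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde
from scipy.optimize import minimize

rng = np.random.default_rng(5)
data = np.concatenate([rng.normal(-2, 1, 150), rng.normal(2, 1, 350)])

kde = gaussian_kde(data)                       # nonparametric density estimate
grid = np.linspace(data.min() - 2, data.max() + 2, 800)
f_hat = kde(grid)
dx = grid[1] - grid[0]

def hellinger_sq(params):
    """Squared Hellinger distance between the KDE and a two-component normal
    mixture parameterized by (logit_pi, mu1, mu2, log_sigma)."""
    logit_pi, mu1, mu2, log_s = params
    pi = 1.0 / (1.0 + np.exp(-logit_pi))
    s = np.exp(log_s)
    g = pi * norm.pdf(grid, mu1, s) + (1 - pi) * norm.pdf(grid, mu2, s)
    return 1.0 - np.sum(np.sqrt(f_hat * g)) * dx   # H^2 = 1 - integral of sqrt(f * g)

res = minimize(hellinger_sq, x0=np.array([0.0, -1.0, 1.0, 0.0]), method="Nelder-Mead")
logit_pi, mu1, mu2, log_s = res.x
print(1 / (1 + np.exp(-logit_pi)), mu1, mu2, np.exp(log_s))   # near 0.3, -2, 2, 1
```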
50.
The marginal likelihood can be notoriously difficult to compute, particularly in high-dimensional problems. Chib and Jeliazkov employed the local reversibility of the Metropolis–Hastings algorithm to construct an estimator for models in which full conditional densities are not available analytically. The estimator is free of distributional assumptions and is directly linked to the simulation algorithm. However, it generally requires a sequence of reduced Markov chain Monte Carlo runs, which makes the method computationally demanding, especially when the parameter space is large. In this article, we study the implementation of this estimator in latent variable models in which the responses are independent given the latent variables (conditional or local independence). This property is exploited to construct a multi-block Metropolis-within-Gibbs algorithm that allows the estimator to be computed in a single run, regardless of the dimensionality of the parameter space. The counterpart one-block algorithm is also considered, and the differences between the two approaches are pointed out. The paper closes with illustrations of the estimator on simulated and real data sets.
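A compact sketch of the basic Chib–Jeliazkov estimator in its simplest single-block form (a one-parameter conjugate normal model, chosen so the exact marginal likelihood is available for comparison; this is not the multi-block latent-variable scheme of the paper): the log marginal likelihood is log f(y|θ*) + log π(θ*) − log π̂(θ*|y), with the posterior ordinate estimated from the Metropolis–Hastings acceptance probabilities and proposal densities.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Model: y_i ~ N(theta, sigma^2) with sigma known; prior theta ~ N(mu0, tau0^2)
rng = np.random.default_rng(6)
sigma, mu0, tau0 = 1.0, 0.0, 2.0
y = rng.normal(1.0, sigma, size=30)

def log_post_unnorm(theta):
    """log f(y | theta) + log pi(theta), i.e. the unnormalized log posterior."""
    return norm.logpdf(y, theta, sigma).sum() + norm.logpdf(theta, mu0, tau0)

# Random-walk Metropolis-Hastings with proposal sd s
s, M, burn = 0.5, 6000, 1000
chain = np.empty(M)
theta, lp = y.mean(), log_post_unnorm(y.mean())
for g in range(M):
    prop = theta + s * rng.normal()
    lp_prop = log_post_unnorm(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain[g] = theta
chain = chain[burn:]

# Chib-Jeliazkov estimate of the posterior ordinate at theta_star
theta_star = chain.mean()
lp_star = log_post_unnorm(theta_star)
lp_chain = np.array([log_post_unnorm(t) for t in chain])

# Numerator: average over posterior draws of alpha(theta, theta*) * q(theta, theta*)
alpha_num = np.minimum(1.0, np.exp(lp_star - lp_chain))
q_num = norm.pdf(theta_star, chain, s)
numer = np.mean(alpha_num * q_num)

# Denominator: average of alpha(theta*, theta_j) over draws theta_j ~ q(theta*, .)
draws = theta_star + s * rng.normal(size=len(chain))
alpha_den = np.minimum(1.0, np.exp(np.array([log_post_unnorm(t) for t in draws]) - lp_star))
denom = np.mean(alpha_den)

log_post_ordinate = np.log(numer / denom)
log_marg_cj = lp_star - log_post_ordinate   # log m(y) = log f(y|t*) + log pi(t*) - log pi(t*|y)

# Exact answer for this conjugate model, for comparison
cov = sigma**2 * np.eye(len(y)) + tau0**2 * np.ones((len(y), len(y)))
log_marg_exact = multivariate_normal.logpdf(y, mean=np.full(len(y), mu0), cov=cov)
print(log_marg_cj, log_marg_exact)
```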