991.
In this article, we consider ranked set sampling (RSS) and investigate seven tests for normality under RSS. Each test is described, and the power of each test is then obtained by Monte Carlo simulation under various alternatives. Finally, the powers of the tests based on RSS are compared with those of the tests based on simple random sampling, and the results are discussed.
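A minimal sketch of the kind of Monte Carlo power study described above, not the authors' code: it draws perfect-ranking RSS samples and estimates the rejection rate of one illustrative normality test (Shapiro–Wilk, applied directly to the RSS data). The set size, number of cycles, and exponential alternative are arbitrary choices for illustration; the paper's seven tests are adapted to the RSS structure and are not reproduced here.

```python
import numpy as np
from scipy import stats

def rss_sample(rng, draw, k=5, m=10):
    """Perfect-ranking RSS: in each cycle, for rank r, draw k units,
    order them, and keep the r-th order statistic (sample size k*m)."""
    sample = []
    for _ in range(m):
        for r in range(k):
            units = np.sort(draw(rng, k))
            sample.append(units[r])
    return np.asarray(sample)

def mc_power(draw, n_rep=2000, alpha=0.05, seed=1):
    """Estimate the rejection rate of Shapiro-Wilk on RSS data by Monte Carlo."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_rep):
        x = rss_sample(rng, draw)
        _, p = stats.shapiro(x)          # H0: the data are normal
        rejections += (p < alpha)
    return rejections / n_rep

# Rejection rate under the null and under an exponential alternative
print(mc_power(lambda rng, n: rng.normal(size=n)))       # close to the nominal level
print(mc_power(lambda rng, n: rng.exponential(size=n)))  # estimated power
```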
992.
Pricing options is an important problem in financial engineering. In many scenarios of practical interest, pricing a financial option written on an underlying asset reduces to computing an expectation with respect to a diffusion process. In general, these expectations cannot be calculated analytically, and one way to approximate them is the Monte Carlo (MC) method; MC methods have been used to price options since at least the 1970s. It has been shown in Del Moral P, Shevchenko PV. [Valuation of barrier options using sequential Monte Carlo. 2014. arXiv preprint] and Jasra A, Del Moral P. [Sequential Monte Carlo methods for option pricing. Stoch Anal Appl. 2011;29:292–316] that sequential Monte Carlo (SMC) methods are a natural tool in this context and can vastly improve over standard MC. In this article, in a similar spirit to Del Moral and Shevchenko (2014) and Jasra and Del Moral (2011), we show that one can achieve significant gains with SMC methods by constructing a sequence of artificial target densities over time. In particular, we approximate the optimal importance sampling distribution in the SMC algorithm by using a sequence of weighting functions. This is demonstrated on two examples, barrier options and target accrual redemption notes (TARNs). We also provide a proof of unbiasedness of our SMC estimate.
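For context, a minimal sketch of the standard MC baseline the abstract contrasts with: pricing an up-and-out barrier call on a discretized geometric Brownian motion path. The SMC scheme of the paper replaces this crude estimator with a sequence of artificial target densities; all parameters below are illustrative only.

```python
import numpy as np

def mc_barrier_call(S0=100.0, K=100.0, B=130.0, r=0.05, sigma=0.2, T=1.0,
                    n_steps=250, n_paths=100_000, seed=0):
    """Crude Monte Carlo price of an up-and-out barrier call under GBM."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0)
    alive = np.ones(n_paths, dtype=bool)              # paths not yet knocked out
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)
        S *= np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        alive &= (S < B)                               # knock out at the barrier
    payoff = np.where(alive, np.maximum(S - K, 0.0), 0.0)
    return np.exp(-r * T) * payoff.mean()

print(mc_barrier_call())
```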
993.
In this paper we introduce the shrinkage estimation method in the lognormal regression model for censored data involving many predictors, some of which may not have any influence on the response of interest. We develop the asymptotic properties of the shrinkage estimators (SEs) using the notion of asymptotic distributional bias and risk. We show that if the shrinkage dimension exceeds two, the asymptotic risk of the SEs is strictly less than that of the corresponding classical estimators. Furthermore, we study the penalty (LASSO and adaptive LASSO) estimation methods and compare their performance with that of the SEs. A simulation study for various combinations of inactive predictors and censoring percentages shows that the SEs perform better than the penalty estimators in certain parts of the parameter space, especially when there are many inactive predictors in the model. It also shows that the shrinkage and penalty estimators outperform the classical estimators. A real-life data example using the Worcester Heart Attack Study is used to illustrate the performance of the suggested estimators.
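A minimal sketch of a Stein-type shrinkage rule of the general form used in this literature, not the paper's exact estimator: the submodel fit is pulled toward the full-model fit by an amount governed by the shrinkage dimension k and a standardized distance t_n between the two fits (e.g. a Wald-type statistic). All names below are illustrative assumptions.

```python
import numpy as np

def shrinkage_estimate(beta_full, beta_sub, t_n, k, positive_part=True):
    """Combine full-model and submodel coefficient estimates by a
    Stein-type rule; requires shrinkage dimension k > 2."""
    assert k > 2, "the asymptotic risk gain requires shrinkage dimension > 2"
    factor = 1.0 - (k - 2) / t_n
    if positive_part:                 # positive-part rule avoids over-shrinking
        factor = max(factor, 0.0)
    return np.asarray(beta_sub) + factor * (np.asarray(beta_full) - np.asarray(beta_sub))
```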
994.
The distribution of the test statistics of homogeneity tests is often unknown, requiring the estimation of the critical values through Monte Carlo (MC) simulations. The computation of the critical values at low α, especially when the distribution of the statistics changes with the series length (sample cardinality), requires a considerable number of simulations to achieve a reasonable precision of the estimates (i.e. 10^6 simulations or more for each series length). If, in addition, the test requires a noteworthy computational effort, the estimation of the critical values may need unacceptably long runtimes.

To overcome the problem, the paper proposes a regression-based refinement of an initial MC estimate of the critical values, which also allows the achieved improvement to be approximated. Moreover, the paper presents an application of the method to two tests: SNHT (the standard normal homogeneity test, widely used in climatology) and SNH2T (a version of SNHT with quadratic numerical complexity). For both, the paper reports the critical values for α ranging between 0.1 and 0.0001 (useful for p-value estimation) and for series lengths ranging from 10 (a widely adopted size in the climatological change-point detection literature) to 70,000 elements (nearly the length of a 200-year daily time series), estimated with coefficients of variation within 0.22%. For SNHT, a comparison of our results with approximate, theoretically derived critical values is also performed; we suggest adopting those values for series exceeding 70,000 elements.
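A minimal sketch of the initial MC step described above: estimating a critical value of the SNHT statistic as an empirical quantile over simulated null (iid standard normal) series. The regression-based refinement that is the paper's contribution is not reproduced, and n_sim here is far below the 10^6 runs the abstract mentions; it is for illustration only.

```python
import numpy as np

def snht_statistic(x):
    """SNHT statistic: max over split points k of k*mean(z[:k])^2 + (n-k)*mean(z[k:])^2,
    with z the standardized series."""
    n = len(x)
    z = (x - x.mean()) / x.std(ddof=1)
    k = np.arange(1, n)
    mean1 = np.cumsum(z)[:-1] / k                     # mean of z[:k]
    mean2 = (z.sum() - np.cumsum(z)[:-1]) / (n - k)   # mean of z[k:]
    return np.max(k * mean1**2 + (n - k) * mean2**2)

def mc_critical_value(n, alpha=0.05, n_sim=20_000, seed=0):
    """Raw MC estimate of the (1 - alpha) null quantile for series length n."""
    rng = np.random.default_rng(seed)
    stats_null = np.array([snht_statistic(rng.standard_normal(n))
                           for _ in range(n_sim)])
    return np.quantile(stats_null, 1.0 - alpha)

print(mc_critical_value(n=100))
```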

995.
For two or more populations whose covariance matrices have a common set of eigenvectors but different sets of eigenvalues, the common principal components (CPC) model is appropriate. Pepler et al. (2015) proposed a regularized CPC covariance matrix estimator and showed that this estimator outperforms the unbiased and pooled estimators in situations where the CPC model is applicable. This article extends their work to discriminant analysis for two groups by plugging the regularized CPC estimator into the ordinary quadratic discriminant function. Monte Carlo simulation results show that CPC discriminant analysis offers significant improvements in misclassification error rates in certain situations, and at worst performs similarly to ordinary quadratic and linear discriminant analysis. Based on these results, CPC discriminant analysis is recommended for situations where the sample size is small compared with the number of variables, in particular when there is uncertainty about the population covariance matrix structures.
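A minimal sketch of the plug-in step described above: the ordinary quadratic discriminant score evaluated with externally supplied covariance estimates. Whatever covariance estimator one prefers (for instance, regularized CPC estimates computed elsewhere; constructing those is the cited paper's contribution and is not reproduced) is simply passed in.

```python
import numpy as np

def qda_classify(x, means, covs, priors):
    """Assign x to the group with the largest quadratic discriminant score,
    using the supplied per-group means, covariance estimates, and priors."""
    scores = []
    for mu, sigma, pi in zip(means, covs, priors):
        diff = np.asarray(x) - np.asarray(mu)
        _, logdet = np.linalg.slogdet(sigma)
        score = -0.5 * logdet - 0.5 * diff @ np.linalg.solve(sigma, diff) + np.log(pi)
        scores.append(score)
    return int(np.argmax(scores))
```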
996.
We consider a Bayesian nonignorable model to accommodate a nonignorable selection mechanism for predicting small area proportions. Our main objective is to extend a model on selection bias from a previously published paper, coauthored by four authors, to accommodate small areas. These authors assume that the survey weights (or their reciprocals, which we also call selection probabilities) are available, but that there is no simple relation between the binary responses and the selection probabilities. To capture the nonignorable selection bias within each area, they assume that the binary responses and the selection probabilities are correlated. To accommodate the small areas, we extend their model to a hierarchical Bayesian nonignorable model and use Markov chain Monte Carlo methods to fit it. We illustrate our methodology using a numerical example based on data on activity limitation from the U.S. National Health Interview Survey. We also perform a simulation study to assess the effect of the correlation between the binary responses and the selection probabilities.
997.
The logistic distribution is one of the fundamental distributions and is widely used for modeling growth curves in survival analysis and biological studies. Applications of this distribution are presented throughout the statistical literature. In this article, goodness-of-fit tests for the logistic distribution based on the empirical distribution function (EDF) are considered. Because the MLEs cannot be obtained explicitly, we compute the test statistics using the approximate maximum likelihood estimates (AMLEs) suggested by Balakrishnan and Cohen (1990), which are simple explicit estimators. Power comparisons of the considered tests are carried out via simulations. Finally, two illustrative examples are presented and analyzed.
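A minimal sketch of one EDF statistic (Anderson–Darling) computed for the logistic distribution. For simplicity the location and scale are estimated by moments rather than by the AMLEs of Balakrishnan and Cohen (1990) used in the paper, and critical values for the resulting statistic would still have to be obtained by simulation.

```python
import numpy as np

def ad_logistic(x):
    """Anderson-Darling statistic against a logistic distribution with
    moment-estimated location and scale."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    mu = x.mean()
    s = np.sqrt(3.0) * x.std(ddof=1) / np.pi         # moment estimate of the scale
    u = 1.0 / (1.0 + np.exp(-(x - mu) / s))          # fitted logistic CDF values
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1.0 - u[::-1])))

rng = np.random.default_rng(0)
print(ad_logistic(rng.logistic(loc=0.0, scale=1.0, size=200)))
```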
998.
Most interval estimates are derived from distributions that are computable conditional on the data. In this article, we call random variables having such conditional distributions confidence distribution variables and define their finite-sample breakdown values. Building on this, we introduce the breakdown value of a confidence interval, which covers breakdowns in both coverage probability and interval length. High-breakdown confidence intervals are constructed by the structural method in location-scale families. Simulation results are presented to compare the traditional confidence intervals with their robust analogues.
999.
Extended zero-one inflated beta and adjusted three-part regression models are introduced to analyze proportional response data in which there are nonzero probabilities that the response variable takes the values zero and one. The proposed models accommodate the skewness and heteroscedasticity of fractional response data and are constructed so that the unknown parameters can be estimated. Extensive Monte Carlo simulation studies are used to compare the performance of the two approaches with respect to bias and root mean square error. A real data example is presented to illustrate the application of both regression models.
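A minimal sketch of the standard zero-one inflated beta log-likelihood that such models build on: point masses p0 at zero and p1 at one, and a Beta(a, b) density on (0, 1). The paper's extensions (covariate-dependent parameters and the adjusted three-part variant) are not reproduced here.

```python
import numpy as np
from scipy import stats

def zoib_loglik(y, p0, p1, a, b):
    """Log-likelihood of a zero-one inflated beta model with mixing
    probabilities p0, p1 and beta shape parameters a, b."""
    y = np.asarray(y, dtype=float)
    at0, at1 = (y == 0.0), (y == 1.0)
    mid = ~(at0 | at1)                                 # observations strictly in (0, 1)
    return (at0.sum() * np.log(p0)
            + at1.sum() * np.log(p1)
            + mid.sum() * np.log(1.0 - p0 - p1)
            + stats.beta.logpdf(y[mid], a, b).sum())
```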
1000.
In some fields, we are forced to work with missing data in multivariate time series. Unfortunately, data analysis in this context cannot be carried out in the same way as with complete data. To deal with this problem, a Bayesian analysis of multivariate threshold autoregressive models with exogenous inputs and missing data is carried out. In this paper, Markov chain Monte Carlo methods are used to obtain samples from the posterior distributions involved, including those of the threshold values and the missing data. To identify the autoregressive orders, we adapt the Bayesian variable selection method to this class of multivariate processes. The number of regimes is estimated using marginal likelihood or product parameter-space strategies.