11.
Summary. The problem motivating the paper is the determination of sample size in clinical trials under normal likelihoods and at the substantive testing stage of a financial audit, where normality is not an appropriate assumption. A combination of analytical and simulation-based techniques within the Bayesian framework is proposed. The framework accommodates two different prior distributions: one is the general-purpose fitting prior distribution that is used in the Bayesian analysis, and the other is the expert subjective prior distribution, the sampling prior, which is believed to generate the parameter values that in turn generate the data. We obtain many theoretical results; one key result is that typical non-informative prior distributions lead to very small sample sizes. In contrast, a very informative prior distribution may lead to either a very small or a very large sample size, depending on the location of the centre of the prior distribution and the hypothesized value of the parameter. The methods developed are quite general and can be applied to other sample size determination problems. Some numerical illustrations that bring out many other aspects of the optimum sample size are given.
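As a rough illustration of the two-prior idea (a diffuse fitting prior used in the analysis versus an informative sampling prior believed to generate the data), the sketch below searches for the smallest sample size whose simulated "assurance" reaches a target level for a normal mean. All priors, thresholds and targets are illustrative assumptions, not the authors' specification.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Illustrative (assumed) quantities -- not taken from the paper.
sigma = 1.0              # known sampling standard deviation
theta0 = 0.0             # hypothesized value, interest in theta > theta0
mu_s, tau_s = 0.5, 0.2   # sampling prior: believed to generate the true theta
mu_f, tau_f = 0.0, 10.0  # fitting prior: diffuse prior used in the analysis

def assurance(n, n_sim=2000):
    """Proportion of simulated trials in which the posterior
    probability P(theta > theta0 | data) exceeds 0.95."""
    theta = rng.normal(mu_s, tau_s, size=n_sim)       # draw from sampling prior
    ybar = rng.normal(theta, sigma / np.sqrt(n))      # sufficient statistic
    post_prec = 1.0 / tau_f**2 + n / sigma**2
    post_mean = (mu_f / tau_f**2 + n * ybar / sigma**2) / post_prec
    post_sd = np.sqrt(1.0 / post_prec)
    p_h1 = 1.0 - norm.cdf((theta0 - post_mean) / post_sd)
    return np.mean(p_h1 >= 0.95)

# Smallest n whose assurance reaches 0.80.
for n in range(5, 500, 5):
    if assurance(n) >= 0.80:
        print("required sample size:", n)
        break
```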
12.
Summary. As part of the EUREDIT project, new methods to detect multivariate outliers in incomplete survey data have been developed. These methods are the first to work with sampling weights and to be able to cope with missing values. Two of these methods are presented here. The epidemic algorithm simulates the propagation of a disease through a population and uses extreme infection times to find outlying observations. Transformed rank correlations are robust estimates of the centre and the scatter of the data. They use a geometric transformation that is based on the rank correlation matrix. The estimates are used to define a Mahalanobis distance that reveals outliers. The two methods are applied to a small data set and to one of the evaluation data sets of the EUREDIT project.
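The core of the rank-correlation approach, robust centre and scatter estimates plugged into a Mahalanobis distance, can be sketched roughly as follows. The MAD scales and rank-correlation-based scatter below are generic stand-ins, not the EUREDIT implementation, and no sampling weights or missing values are handled.

```python
import numpy as np
from scipy.stats import rankdata, chi2

rng = np.random.default_rng(1)

# Toy data: 200 bivariate observations with 5 planted outliers.
X = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=200)
X[:5] += 8.0

# Robust centre and scale (coordinatewise median and MAD).
center = np.median(X, axis=0)
scale = 1.4826 * np.median(np.abs(X - center), axis=0)

# Rank correlation matrix, converted to an approximate Pearson correlation,
# then rebuilt into a robust scatter estimate with the MAD scales.
ranks = np.apply_along_axis(rankdata, 0, X)
R = np.corrcoef(ranks, rowvar=False)
R = 2.0 * np.sin(np.pi * R / 6.0)
S = np.outer(scale, scale) * R

# Squared Mahalanobis distances; a chi-square cut-off flags outliers.
D = X - center
d2 = np.einsum('ij,jk,ik->i', D, np.linalg.inv(S), D)
outliers = np.where(d2 > chi2.ppf(0.99, df=X.shape[1]))[0]
print("flagged observations:", outliers)
```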
13.
Sampling designs that depend on sample moments of auxiliary variables are well known. Lahiri (Bull Int Stat Inst 33:133–140, 1951) considered a sampling design proportionate to the sample mean of an auxiliary variable. Singh and Srivastava (Biometrika 67(1):205–209, 1980) proposed a sampling design proportionate to a sample variance, while Wywiał (J Indian Stat Assoc 37:73–87, 1999) proposed one proportionate to a sample generalized variance of auxiliary variables. Other sampling designs that depend on moments of an auxiliary variable were considered, e.g., in Wywiał (Some Contributions to Multivariate Methods in Survey Sampling. Katowice University of Economics, Katowice, 2003a; Stat Transit 4(5):779–798, 2000), where the accuracy of several sampling strategies was also compared. These sampling designs are not useful when some observations of the auxiliary variable are censored. Moreover, they can be much too sensitive to outlying observations. In such cases a sampling design proportionate to an order statistic of the auxiliary variable can be more useful, and such an unequal-probability sampling design is proposed here. Its particular cases as well as its conditional version are also considered. A sampling scheme implementing this sampling design is proposed, and the first- and second-order inclusion probabilities are evaluated. The well-known Horvitz–Thompson estimator is taken into account. A ratio estimator dependent on an order statistic is constructed; it is similar to the well-known ratio estimator based on the population and sample means. Moreover, it is an unbiased estimator of the population mean when the sample is drawn according to the proposed sampling design dependent on the appropriate order statistic.
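The Horvitz–Thompson estimator referred to in the abstract weights each sampled value by the inverse of its first-order inclusion probability. A minimal sketch with made-up data and inclusion probabilities (not derived from the order-statistic design of the paper):

```python
import numpy as np

# Assumed toy example: a sample of units with study values y and
# first-order inclusion probabilities pi (from any unequal-probability design).
y = np.array([12.0, 7.5, 20.1, 15.3])
pi = np.array([0.10, 0.05, 0.20, 0.15])
N = 50  # population size

# Horvitz-Thompson estimator of the population total, and the implied mean.
total_ht = np.sum(y / pi)
mean_ht = total_ht / N
print("HT total:", total_ht, " HT mean:", mean_ht)
```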
14.
An Inquiry into the Statistical Thought in the 《九章算术》 (Jiuzhang Suanshu, Nine Chapters on the Mathematical Art)   Cited by: 1 (self-citations: 0, citations by others: 1)
Xing Li (邢莉). Statistical Research (《统计研究》), 2008, 25(3): 102–105
The 《九章算术》 is the most famous mathematical classic in Chinese history. It is organized as a collection of applied problems with worked solutions, and since these problems largely concern social and economic matters, the book contains a wealth of statistical thought. This paper analyses the statistical knowledge in the book from the perspectives of statistical scope and statistical thinking, discussing ideas such as statistical grouping, inference from samples, linear regression analysis and proportional relationships, and on this basis examines the state of development of statistical thought in ancient China at the time.
15.
In an adaptive clinical trial setup, there exist adaptive designs that assign an incoming individual to a treatment so that more study subjects are assigned to the better treatment. These designs are, however, developed under the assumption that an individual patient provides a single response. In practice, there are situations where an individual assigned to a treatment may be required to provide repeated responses over a period of time. Recently, Sutradhar et al. [Sutradhar, B.C., Biswas, A., and Bari, W., 2005, Marginal regression for binary longitudinal data in adaptive clinical trials. Scandinavian Journal of Statistics, 32, 93–113] proposed a simple longitudinal play-the-winner (SLPW) design as a generalization of the existing simple play-the-winner (SPW) design, in order to assign an incoming individual to the better treatment under the binary longitudinal setup. In this paper, we deal with longitudinal count responses and examine, through a simulation study, the performance of the SLPW design and of a new bivariate random-walk-type design in allocating an individual patient to the better treatment group. As far as estimation of the parameters is concerned, we examine the performance of a weighted generalized quasi-likelihood approach in estimating the parameters of the longitudinal model, including the treatment effects.
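The single-response simple play-the-winner rule that the SLPW design generalizes can be simulated in a few lines. The success probabilities and the stay-on-success/switch-on-failure rule below are an illustrative sketch only, not the SLPW or the bivariate random-walk design studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
p_success = {"A": 0.7, "B": 0.5}   # assumed true success rates

def simple_play_the_winner(n_patients):
    """Stay on a treatment after a success, switch after a failure."""
    allocation = []
    current = str(rng.choice(["A", "B"]))
    for _ in range(n_patients):
        allocation.append(current)
        success = rng.random() < p_success[current]
        if not success:
            current = "B" if current == "A" else "A"
    return allocation

alloc = simple_play_the_winner(200)
print("share assigned to the better treatment A:", alloc.count("A") / len(alloc))
```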
16.
In the growing literature on factor analysis, little has been done to understand the finite-sample properties of an approximate factor model solution. In empirical applications with relatively small samples, the asymptotic theory might be a poor approximation, and the resulting distortions might affect both estimation (bias in the point estimates and the standard errors) and statistical inference. The present paper uses the estimation method of Bai and Ng [Bai, J. and Ng, S., 2002, Determining the number of factors in approximate factor models. Econometrica, 70, 191–221] and assesses the sampling behaviour of the estimated common components, common factors and factor loadings. The study compares the empirical distributions to the asymptotic theory of Bai [Bai, J., 2003, Inference on factor models of large dimension. Econometrica, 71, 135–171]. Simulation results suggest that the point estimates have a Gaussian distribution for panels with relatively small dimensions. However, these estimates have a significant finite-sample bias, and the dispersion of their sampling distribution is severely underestimated by the asymptotic theory.
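A rough sketch of the principal-components estimation used by Bai and Ng, applied to a simulated panel with the number of factors taken as known rather than selected by their information criteria:

```python
import numpy as np

rng = np.random.default_rng(3)
T, N, r = 100, 50, 2                    # panel dimensions and number of factors

# Simulate an approximate factor model X = F L' + e.
F = rng.normal(size=(T, r))
L = rng.normal(size=(N, r))
X = F @ L.T + rng.normal(scale=0.5, size=(T, N))

# Principal-components estimator: the top-r eigenvectors of X X',
# scaled by sqrt(T), estimate the factors; loadings follow by least squares.
eigval, eigvec = np.linalg.eigh(X @ X.T)
F_hat = np.sqrt(T) * eigvec[:, -r:][:, ::-1]   # eigenvectors of the r largest eigenvalues
L_hat = X.T @ F_hat / T                        # implied factor loadings
common_hat = F_hat @ L_hat.T                   # estimated common component
fit = 1 - np.mean((common_hat - F @ L.T) ** 2) / np.var(F @ L.T)
print("fit of estimated common component:", round(fit, 3))
```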
17.
The most common method of estimating the parameters of a vector-valued autoregressive time series model is the method of least squares (LS). However, since LS estimates are sensitive to the presence of outliers, more robust techniques are often useful. This paper investigates one such technique, weighted-L1 estimates. Following traditional methods of proof, asymptotic uniform linearity and asymptotic uniform quadricity results are established. Additionally, the gradient of the objective function is shown to be asymptotically normal. These results imply that the weighted-L1 parameter estimates for this model are asymptotically normal at rate n^(-1/2). The results rely heavily on covariance inequalities for geometric absolutely regular processes and a martingale central limit theorem. Estimates for the asymptotic variance–covariance matrix are also discussed. A finite-sample efficiency study is presented to examine the performance of the weighted-L1 estimate in the presence of both innovation and additive outliers. Specifically, the classical LS estimate is compared with three versions of the weighted-L1 estimate. Finally, a quadravariate financial time series is used to demonstrate the estimation procedure. A brief residual analysis is also presented.
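A plain (unweighted) L1, i.e. least-absolute-deviations, fit of a univariate AR(1) conveys the basic idea behind L1-type estimation under outlier-prone innovations. The generic optimizer below is only an illustration, not the weighted-L1 methodology of the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Simulate an AR(1) series with heavy-tailed (outlier-prone) innovations.
n, phi_true = 300, 0.6
eps = rng.standard_t(df=2, size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + eps[t]

# L1 (LAD) objective: sum of absolute one-step-ahead residuals.
def lad_loss(phi):
    resid = y[1:] - phi[0] * y[:-1]
    return np.sum(np.abs(resid))

phi_l1 = minimize(lad_loss, x0=[0.0], method="Nelder-Mead").x[0]
# Least-squares estimate for comparison.
phi_ls = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
print("L1 estimate:", round(phi_l1, 3), " LS estimate:", round(phi_ls, 3))
```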
18.
The global minimum variance portfolio (GMVP) is the starting point of the Markowitz mean-variance efficient frontier. The estimation of the GMVP weights is therefore of much importance for financial investors. The GMVP weights depend only on the inverse covariance matrix of returns on the risky financial assets; for this reason the estimated GMVP weights are subject to substantial estimation risk, especially in high-dimensional portfolio settings. In this paper we review the recent literature on traditional sample estimators for the unconditional GMVP weights, which are typically based on daily asset returns, as well as on modern realized estimators for the conditional GMVP weights based on intraday high-frequency returns. We present various types of GMVP estimators with the corresponding stochastic results, discuss statistical tests and point out some directions for further research. Our empirical application illustrates selected properties of realized GMVP weights (a plug-in sketch of the weight formula follows the categorization list below). This article is categorized under:
  • Statistical and Graphical Methods of Data Analysis > Multivariate Analysis
  • Statistical and Graphical Methods of Data Analysis > Analysis of High Dimensional Data
  • Statistical Models > Multivariate Models
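The GMVP weights mentioned above have the closed form w = Σ⁻¹1 / (1ʹΣ⁻¹1). The sketch below plugs a plain sample covariance matrix of simulated returns into this formula; it is the naive plug-in estimator, not any of the realized or regularized estimators reviewed in the article.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated daily returns for 5 assets (illustrative only).
A = rng.normal(size=(5, 5))
true_cov = (A @ A.T + 5 * np.eye(5)) * 1e-4   # positive-definite by construction
returns = rng.multivariate_normal(np.zeros(5), true_cov, size=500)

# Plug-in GMVP weights: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
sigma_inv = np.linalg.inv(np.cov(returns, rowvar=False))
ones = np.ones(5)
w = sigma_inv @ ones / (ones @ sigma_inv @ ones)
print("GMVP weights:", np.round(w, 3), " sum:", round(w.sum(), 6))
```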
19.
The MG-procedure in ranked set sampling is studied in this paper. It is shown that the MG-procedure with any selective probability matrix provides a more efficient estimator than the sample mean based on simple random sampling. The optimum selective probability matrix in the procedure is obtained, and the estimator based on it is shown to be more efficient than that studied by Yanagawa and Shirahata [5]. The median-mean estimator, which is more efficient and can be easier to apply than those proposed by McIntyre [2] and Takahashi and Wakimoto [3], is proposed when the underlying distribution function belongs to a certain subfamily of symmetric distribution functions that includes the normal, logistic and double exponential distributions, among others.
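A plain McIntyre-style ranked set sample mean, the benchmark that the MG-procedure and the median-mean estimator improve upon, can be simulated as follows. Perfect ranking is assumed, and this is not the MG-procedure itself.

```python
import numpy as np

rng = np.random.default_rng(6)

def rss_mean(m, cycles, draw):
    """Ranked set sampling: in each cycle, draw m sets of size m,
    keep the i-th order statistic from the i-th set, and average."""
    kept = []
    for _ in range(cycles):
        for i in range(m):
            kept.append(np.sort(draw(m))[i])
    return np.mean(kept)

draw = lambda k: rng.normal(loc=10.0, scale=2.0, size=k)

# Compare RSS with an SRS mean of the same total sample size (m * cycles).
m, cycles, reps = 3, 10, 2000
rss = [rss_mean(m, cycles, draw) for _ in range(reps)]
srs = [np.mean(draw(m * cycles)) for _ in range(reps)]
print("var(RSS mean):", np.var(rss), " var(SRS mean):", np.var(srs))
```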
20.
Employing certain generalized random permutation models and a general class of linear estimators of a finite population mean, it is shown that many of the conventional estimators are "optimal" in the sense of minimum average mean square error. Simple proofs are provided by using a well-known theorem on UMV estimation. The results also cover certain simple response error situations.