Full-text access type
Paid full text | 136 articles |
Free | 1 article |
Subject classification
Management | 9 articles |
Ethnology | 1 article |
Demography | 1 article |
Collected works | 6 articles |
Theory and methodology | 2 articles |
General | 31 articles |
Sociology | 5 articles |
Statistics | 82 articles |
Publication year
2020 | 3 articles |
2019 | 3 articles |
2018 | 3 articles |
2017 | 9 articles |
2016 | 3 articles |
2015 | 4 articles |
2014 | 7 articles |
2013 | 37 articles |
2012 | 5 articles |
2011 | 9 articles |
2010 | 5 articles |
2009 | 7 articles |
2008 | 3 articles |
2007 | 4 articles |
2006 | 5 articles |
2005 | 5 articles |
2004 | 4 articles |
2003 | 4 articles |
2002 | 1 article |
2001 | 4 articles |
1999 | 2 articles |
1998 | 2 articles |
1995 | 1 article |
1992 | 1 article |
1991 | 1 article |
1989 | 2 articles |
1987 | 2 articles |
1982 | 1 article |
137 search results in total
31.
Based on an empirical analysis of 138 announcements of block purchases and sales by large shareholders over 2006-2008, we find the following: block trading by large shareholders can mitigate investor herding and overreaction during the financial crisis and improve both systemic and structural market pricing efficiency; after the split-share structure reform, controlling shareholders of companies with a higher degree of ownership balance tend to increase their holdings, indicating that competition for corporate control is gradually emerging, which benefits external governance mechanisms; and outside blockholders do not play the role of disciplining inefficient management that foreign empirical studies have documented. Regulation of trading by large shareholders, and outside blockholders in particular, should be strengthened to protect the interests of minority shareholders.
32.
33.
The inverse hypergeometric distribution arises in applications of inverse sampling without replacement from a finite population in which a binary observation is made on each sampling unit: units are chosen at random, one at a time, until a specified number of one of the two types has been selected. Assuming the total number of units in the population is known but the number of each type is not, we consider the problem of estimating this parameter. We use the Delta method to develop approximations for the variance of three parameter estimators and then propose three large-sample confidence intervals for the parameter. Based on these results, we selected a sampling of parameter values for the inverse hypergeometric distribution to investigate the estimators' performance empirically, evaluated in terms of expected coverage probability and confidence interval length, each calculated as a mean over the possible outcomes weighted by the appropriate outcome probabilities for each parameter value considered. The unbiased estimator is preferred to the maximum likelihood estimator and to an estimator based on a negative binomial approximation, as evidenced by empirical estimates of closeness to the true parameter value. Confidence intervals based on the unbiased estimator tend to be shorter than those of the two competitors because of its relatively small variance, but at a slight cost in coverage probability.
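The inverse sampling scheme described above is easy to simulate. The sketch below is illustrative only: the function names and the simple ratio-style estimate c·N/x are assumptions of this sketch, since the abstract does not reproduce the formulas of the unbiased, maximum likelihood, or negative binomial based estimators it compares.

```python
import random

def inverse_sample(population, target, c, rng=None):
    """Draw units without replacement until c units equal to `target`
    have been selected; return the total number of draws."""
    rng = rng or random.Random()
    units = list(population)
    rng.shuffle(units)
    successes = 0
    for draws, u in enumerate(units, start=1):
        if u == target:
            successes += 1
            if successes == c:
                return draws
    raise ValueError("population contains fewer than c target units")

def ratio_estimate(N, c, x):
    """Illustrative ratio-style estimate of the unknown count of target
    units: the sample proportion c/x scaled up to the population size N."""
    return c * N / x
```

As a sanity check, if every unit in the population is of the target type, exactly c draws are needed and the estimate recovers N exactly.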
34.
Adoption Quarterly, 2013, 16(2): 79-88
The Research Digest column describes the contents, availability, and methods of accessing large datasets suitable for adoption research. Because these archival datasets were collected or funded by a variety of federal government agencies, their accessibility enables researchers to pursue some of their investigative interests without all the problems related to data collection. Most of the data are relatively new, having become available to the public only in the 1980s. Secondary analysis of adoption-related material in these datasets could yield valuable insights into important adoption research questions.
35.
Etebong P. Clement, Communications in Statistics - Theory and Methods, 2018, 47(12): 2835-2847
This study proposes a more efficient calibration estimator of the population mean in stratified double sampling, based on new calibration weights. The variance of the proposed calibration estimator is derived under a large-sample approximation. The calibration asymptotic optimum estimator and its approximate variance estimator are derived both for the proposed estimator and for existing calibration estimators in stratified double sampling. Analytical results show that the proposed calibration estimator is more efficient than existing members of its class in stratified double sampling; supporting analysis and evaluation are presented.
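The defining property of calibration weighting is easy to show concretely. The single-stratum sketch below uses linear (chi-square distance) calibration with made-up design weights and auxiliary values; the function name and toy numbers are assumptions of this sketch, not the paper's estimator, whose stratified double sampling version applies the idea stratum by stratum with estimated auxiliary totals.

```python
def calibrate(d, x, X_total):
    """Linear (chi-square distance) calibration: return weights
    w_i = d_i * (1 + lam * x_i) chosen so that sum(w_i * x_i) == X_total."""
    sdx = sum(di * xi for di, xi in zip(d, x))
    sdx2 = sum(di * xi * xi for di, xi in zip(d, x))
    lam = (X_total - sdx) / sdx2
    return [di * (1 + lam * xi) for di, xi in zip(d, x)]

# Toy data: equal design weights, auxiliary values, and a known auxiliary total.
w = calibrate([2.0, 2.0, 2.0], [1.0, 2.0, 3.0], 15.0)
```

After calibration the weighted auxiliary total matches the known population total exactly, which is the benchmark constraint that calibration estimators of the mean are built on.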
36.
This paper studies hypothesis testing for time-inhomogeneous diffusion processes. With the help of large and moderate deviations for the log-likelihood ratio process, we give the rejection regions and obtain the decay rates of the error probabilities. Moreover, we apply our results to hypothesis testing for the α-Wiener bridge.
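The α-Wiener bridge mentioned above solves dX_t = -α X_t/(T - t) dt + dW_t with X_0 = 0. A minimal Euler-Maruyama sketch of a sample path follows; the function names, step counts, and seed are illustrative assumptions, and the paper's likelihood-ratio analysis is not reproduced here.

```python
import math
import random

def bridge_drift(x, t, alpha, T):
    """Drift of the alpha-Wiener bridge dX = -alpha * X / (T - t) dt + dW."""
    return -alpha * x / (T - t)

def simulate_bridge(alpha=1.0, T=1.0, n=1000, seed=0):
    """Euler-Maruyama path of the alpha-Wiener bridge, stopped one step
    before T to avoid the singular drift at t = T."""
    rng = random.Random(seed)
    dt = T / n
    x = 0.0
    for i in range(n - 1):
        t = i * dt
        x += bridge_drift(x, t, alpha, T) * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x
```

For α > 0 the drift pulls the path back toward 0 as t approaches T; this pinning behaviour is what distinguishes the bridge from a plain Wiener process.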
37.
Colin L. Mallows, The American Statistician, 2013, 67(4): 179-184
We describe several analyses in which robustness considerations proved relevant. These examples illustrate the importance of (a) graphical displays as aids to formulating a preliminary model; (b) summary statistics that reduce the influence of outliers, (c) give added opportunities for detecting relationships, and (d) are not unduly sensitive to granularity in the observations; and (e) techniques that pay due attention to anomalies in the data that may superficially appear negligible but can obscure important effects. Finally, we make some general comments on the advantages and disadvantages of robust methodology.
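Point (b) is easy to demonstrate: with one wild observation, the sample mean shifts badly while the median and a trimmed mean barely move. A small sketch with made-up data, assumed purely for illustration:

```python
import statistics

def trimmed_mean(data, prop=0.1):
    """Mean after discarding the lowest and highest `prop` fraction of values."""
    s = sorted(data)
    k = int(len(s) * prop)
    return statistics.mean(s[k:len(s) - k] if k else s)

# Nine well-behaved readings plus one gross outlier.
data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1, 99.0]
```

Here the single outlier pulls the mean above 18, while the median and the 10% trimmed mean both stay at about 10.05.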
38.
Extensions of some limit theorems are proved for tail probabilities of sums of independent identically distributed random variables satisfying the one-sided or two-sided Cramér's condition. The large deviation x-region under consideration is broader than in the classical Cramér's theorem, and the estimate of the remainder is uniform with respect to x. The corresponding asymptotic expansion with arbitrarily many summands is also obtained.
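In Cramér-type results the decay rate of the tail probability is governed by the Legendre transform of the cumulant generating function, I(x) = sup_t [tx - log M(t)]. A crude numerical sketch via grid search follows; the standard normal example, where log M(t) = t²/2 and hence I(x) = x²/2, is a textbook illustration rather than anything from this paper.

```python
def rate_function(x, log_mgf, t_grid):
    """Numerical Legendre transform: I(x) = sup over t of (t*x - log M(t))."""
    return max(t * x - log_mgf(t) for t in t_grid)

# Standard normal: log M(t) = t^2 / 2, so the supremum sits at t = x.
t_grid = [i / 1000.0 for i in range(-5000, 5001)]
normal_rate = rate_function(1.0, lambda t: t * t / 2.0, t_grid)
```

Since the grid contains t = 1.0, the computed rate equals x²/2 = 0.5 up to floating point error; Cramér's theorem then gives P(S_n/n ≥ x) decaying like exp(-n I(x)).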
39.
Toshiyuki Sato, Yoshihide Kakizawa & Masanobu Taniguchi, Australian & New Zealand Journal of Statistics, 1998, 40(1): 17-29
This paper discusses the large deviation principle for several important statistics of short- and long-memory Gaussian processes. First, large deviation theorems for the log-likelihood ratio and quadratic forms of a short-memory Gaussian process with a mean function are proved. Their asymptotics are described by the large deviation rate functions; since these are complicated, they are evaluated numerically and illustrated using the Maple V system (Char et al., 1991a,b). Second, the large deviation theorem for the log-likelihood ratio statistic of a long-memory Gaussian process with constant mean is proved. The asymptotics of the long-memory case differ greatly from those of the short-memory case. The maximum likelihood estimator of a spectral parameter for a short-memory Gaussian stationary process is asymptotically efficient in the sense of Bahadur.
40.
ROC analysis involving two large datasets is an important method for analyzing statistics of interest for the decision making of a classifier in many disciplines. Because resources are limited, the same subjects are often used multiple times to generate more samples, so data dependency is ubiquitous. Hence, a two-layer data structure is constructed, and the nonparametric two-sample two-layer bootstrap is employed to estimate standard errors of statistics of interest derived from two sets of data, such as a weighted sum of two probabilities. In this article, to reduce the bootstrap variance and ensure the accuracy of computation, Monte Carlo studies of bootstrap variability were carried out to determine the appropriate number of bootstrap replications in ROC analysis with data dependency. The results suggest that, for a tolerance of 0.02 on the coefficient of variation, 2,000 bootstrap replications are appropriate under such circumstances.
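The replication-size question can be explored with a toy version of the scheme: repeat a two-sample bootstrap several times and look at the coefficient of variation of the resulting standard error estimates. The sketch below uses made-up data and a plain single-layer resample; the function names are assumptions of this sketch, and the article's two-layer structure for dependent data is not reproduced.

```python
import random
import statistics

def bootstrap_se_diff(a, b, n_boot, rng):
    """Bootstrap standard error of the difference in sample means."""
    diffs = []
    for _ in range(n_boot):
        ra = [rng.choice(a) for _ in a]
        rb = [rng.choice(b) for _ in b]
        diffs.append(statistics.mean(ra) - statistics.mean(rb))
    return statistics.stdev(diffs)

def cv_of_se(a, b, n_boot, n_rep=20, seed=0):
    """Coefficient of variation of the bootstrap SE across repeated runs:
    a crude check of whether n_boot replications are enough."""
    rng = random.Random(seed)
    ses = [bootstrap_se_diff(a, b, n_boot, rng) for _ in range(n_rep)]
    return statistics.stdev(ses) / statistics.mean(ses)
```

Increasing n_boot drives the CV down; the article's Monte Carlo study applies the same idea in the dependent, two-layer setting and arrives at roughly 2,000 replications for a CV tolerance of 0.02.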