Full-text access type
| Access type | Articles |
| --- | --- |
| Paid full text | 842 |
| Free | 55 |
| Free (domestic) | 2 |

Subject classification
| Subject | Articles |
| --- | --- |
| Management | 39 |
| Ethnology | 4 |
| Demography | 38 |
| Collected works (series) | 12 |
| Theory and methodology | 106 |
| General | 60 |
| Sociology | 213 |
| Statistics | 427 |

Publication year
| Year | Articles |
| --- | --- |
| 2023 | 14 |
| 2022 | 12 |
| 2021 | 16 |
| 2020 | 30 |
| 2019 | 43 |
| 2018 | 50 |
| 2017 | 61 |
| 2016 | 44 |
| 2015 | 41 |
| 2014 | 37 |
| 2013 | 185 |
| 2012 | 54 |
| 2011 | 38 |
| 2010 | 27 |
| 2009 | 39 |
| 2008 | 26 |
| 2007 | 21 |
| 2006 | 27 |
| 2005 | 18 |
| 2004 | 26 |
| 2003 | 16 |
| 2002 | 13 |
| 2001 | 21 |
| 2000 | 8 |
| 1999 | 8 |
| 1998 | 8 |
| 1997 | 5 |
| 1996 | 2 |
| 1995 | 2 |
| 1994 | 3 |
| 1993 | 1 |
| 1992 | 1 |
| 1991 | 1 |
| 1987 | 1 |
A total of 899 results were returned for this query; items 11–20 are listed below.
11.
In a missing data setting, we have a sample in which a vector of explanatory variables ${\bf x}_i$ is observed for every subject i, while scalar responses $y_i$ are missing by happenstance on some individuals. In this work we propose robust estimators of the distribution of the responses assuming missing at random (MAR) data, under a semiparametric regression model. Our approach allows the consistent estimation of any weakly continuous functional of the response's distribution. In particular, strongly consistent estimators of any continuous location functional, such as the median, L‐functionals and M‐functionals, are proposed. A robust fit for the regression model combined with the robust properties of the location functional gives rise to a robust recipe for estimating the location parameter. Robustness is quantified through the breakdown point of the proposed procedure. The asymptotic distribution of the location estimators is also derived. The proofs of the theorems are presented in Supplementary Material available online. The Canadian Journal of Statistics 41: 111–132; 2013 © 2012 Statistical Society of Canada
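The authors' semiparametric robust procedure is not reproduced here. As a rough illustration of the general recipe the abstract describes (a robust regression fit on the complete cases, followed by a robust location functional applied to the observed and predicted responses together), here is a minimal Python sketch on simulated data; the Huber regression, the simulated MAR mechanism and all parameter values are illustrative assumptions, not the paper's estimator.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor  # robust fit; a stand-in, not the paper's semiparametric estimator

rng = np.random.default_rng(0)

# Simulated data: x_i observed for every subject, y_i missing at random for some.
n = 500
x = rng.normal(size=(n, 3))
y = x @ np.array([1.0, -0.5, 2.0]) + rng.standard_t(df=3, size=n)   # heavy-tailed errors
prob_obs = 1 / (1 + np.exp(-(0.5 + x[:, 0])))                       # MAR: missingness depends on x only
observed = rng.random(n) < prob_obs

# Step 1: robust regression fit on the complete cases.
fit = HuberRegressor().fit(x[observed], y[observed])

# Step 2: a robust location functional (here the median) applied to the observed
# responses together with robust predictions for the missing ones.
y_filled = np.where(observed, y, fit.predict(x))
print("robust location estimate (median):", round(float(np.median(y_filled)), 3))
```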
12.
The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re‐analysis of data from a confirmatory clinical trial in depression. A likelihood‐based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departure from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that after dropout the trajectory of drug‐treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was −2.79 (p = .013). In placebo multiple imputation, the result was −2.17. Results from the other sensitivity analyses ranged from −2.21 to −3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework of using a worst reasonable case result, based on a controlled imputation approach with transparent and debatable assumptions, supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.
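The trial's actual models are not reproduced here. The following toy sketch only illustrates the placebo multiple imputation idea (impute dropouts from a model fitted to placebo completers, then pool the treatment contrasts with Rubin's rules); the baseline-only linear imputation model, the simulated data and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def impute_from_placebo(baseline, y_obs, arm, dropout, rng):
    """Impute missing endpoints from a linear model fitted on placebo completers."""
    placebo = (arm == 0) & ~dropout
    X = np.column_stack([np.ones(placebo.sum()), baseline[placebo]])
    beta, *_ = np.linalg.lstsq(X, y_obs[placebo], rcond=None)
    resid_sd = np.std(y_obs[placebo] - X @ beta)
    y = y_obs.copy()
    Xm = np.column_stack([np.ones(dropout.sum()), baseline[dropout]])
    y[dropout] = Xm @ beta + rng.normal(scale=resid_sd, size=dropout.sum())
    return y

# Toy data: baseline score, endpoint change, arm (1 = drug), and dropout indicator.
n = 400
arm = rng.integers(0, 2, n)
baseline = rng.normal(25, 5, n)
endpoint = 0.3 * baseline - 8 - 2.5 * arm + rng.normal(0, 6, n)
dropout = rng.random(n) < 0.25
endpoint_obs = np.where(dropout, np.nan, endpoint)

# Multiple imputation: each completed data set yields a treatment contrast;
# Rubin's rules combine the point estimates and their variances.
M = 20
contrasts, variances = [], []
for _ in range(M):
    y = impute_from_placebo(baseline, endpoint_obs, arm, dropout, rng)
    diff = y[arm == 1].mean() - y[arm == 0].mean()
    var = y[arm == 1].var(ddof=1) / (arm == 1).sum() + y[arm == 0].var(ddof=1) / (arm == 0).sum()
    contrasts.append(diff)
    variances.append(var)

qbar = np.mean(contrasts)          # pooled estimate
ubar = np.mean(variances)          # within-imputation variance
b = np.var(contrasts, ddof=1)      # between-imputation variance
total_var = ubar + (1 + 1 / M) * b
print(f"pooled treatment contrast: {qbar:.2f} (SE {np.sqrt(total_var):.2f})")
```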
13.
Andrea Mercatanti, Australian & New Zealand Journal of Statistics, 2013, 55(2): 129–153
The exclusion restriction is usually assumed for identifying causal effects in randomized experiments with noncompliance, whether the randomization is true or only natural. It requires that the assignment to treatment does not have a direct causal effect on the outcome. Despite its importance, the restriction can often be unrealistic, especially in natural experiments. It is shown that, without the exclusion restriction, the parametric model is identified if the outcome distributions of the various compliance statuses belong to the same parametric class and that class is a linearly independent set over the field of real numbers. However, relaxing the exclusion restriction yields a parametric model characterized by the presence of mixtures of distributions. This complicates likelihood‐based estimation because the likelihood can have more than one maximum. A two-step estimation procedure, based on detecting the root that is closest to the method of moments estimate of the parameter vector, is then proposed and analyzed in detail under normally distributed outcomes. An economic example with real data concerning returns to schooling concludes the paper.
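As a hedged illustration of the two-step idea (collect the local maximum likelihood points found from several starting values, then keep the root closest to a moment-type pilot estimate), here is a toy sketch for a two-component normal mixture with known weight; the crude mean ± sd pilot stands in for the paper's method-of-moments step and is purely an assumption for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)

# Toy two-component normal mixture with known weight and unit variances; the two
# component means are the parameters whose likelihood has more than one local maximum.
p, mu = 0.4, (0.0, 2.0)
z = rng.random(2000) < p
y = np.where(z, rng.normal(mu[0], 1, 2000), rng.normal(mu[1], 1, 2000))

def negloglik(theta):
    m1, m2 = theta
    return -np.sum(np.log(p * norm.pdf(y, m1, 1) + (1 - p) * norm.pdf(y, m2, 1)))

# Step 1: collect the distinct local maxima found from several starting points.
roots = []
for start in [(-2, 2), (2, -2), (0, 0), (1, 3)]:
    res = minimize(negloglik, start, method="Nelder-Mead")
    if res.success and not any(np.allclose(res.x, r, atol=1e-3) for r in roots):
        roots.append(res.x)

# Step 2: keep the root closest to a crude moment-type pilot estimate
# (here simply mean -/+ sd, standing in for the paper's method-of-moments step).
pilot = np.array([y.mean() - y.std(), y.mean() + y.std()])
best = min(roots, key=lambda r: np.linalg.norm(r - pilot))
print("selected root:", np.round(best, 3))
```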
14.
In a missing-data setting, we want to estimate the mean of a scalar outcome, based on a sample in which an explanatory variable is observed for every subject while responses are missing by happenstance for some of them. We consider two kinds of estimates of the mean response when the explanatory variable is functional. One is based on the average of the predicted values and the second is a functional adaptation of the Horvitz–Thompson estimator. We show that the infinite dimensionality of the problem does not affect the rates of convergence: both estimates are root-n consistent under the missing at random (MAR) assumption. These asymptotic results are complemented by simulated experiments illustrating the ease of implementation and the good finite-sample behaviour of the method. This is the first paper to emphasize that the insensitivity of averaged estimates to the dimension of the covariate, well known in multivariate non-parametric statistics, remains true for an infinite-dimensional covariate. In this sense, this work opens the way for various other results of this kind in functional data analysis.
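The functional estimators themselves are not reproduced here. The sketch below only illustrates the two kinds of mean estimates the abstract compares, the average of regression predictions and an inverse-probability-weighted (Horvitz–Thompson-type) average, using a crude scalar summary of each curve in place of a genuine functional regression; all modelling choices are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(3)

# Toy functional covariate: each subject's curve observed on a common grid.
n, grid = 300, np.linspace(0, 1, 50)
scores = rng.normal(size=(n, 2))
curves = scores[:, [0]] * grid + scores[:, [1]] * np.sin(np.pi * grid)

summary = curves.mean(axis=1).reshape(-1, 1)                 # crude scalar summary of each curve
y = 2 + 3 * summary[:, 0] + rng.normal(0, 1, n)              # scalar response
prob_obs = 1 / (1 + np.exp(-(1 + 2 * summary[:, 0])))        # MAR: missingness driven by the covariate
observed = rng.random(n) < prob_obs

# Estimate 1: average of predicted values (regression fitted on the complete cases).
reg = LinearRegression().fit(summary[observed], y[observed])
mean_regression = reg.predict(summary).mean()

# Estimate 2: Horvitz-Thompson-type inverse-probability-weighted average,
# with the observation probabilities estimated by logistic regression.
pi_hat = LogisticRegression().fit(summary, observed).predict_proba(summary)[:, 1]
mean_ht = np.sum(y[observed] / pi_hat[observed]) / n

print(f"full-sample mean {y.mean():.3f}, regression estimate {mean_regression:.3f}, "
      f"Horvitz-Thompson estimate {mean_ht:.3f}")
```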
15.
Testing the hypothesis of equal means of a bivariate normal distribution with homoscedastic variates when the data are incomplete is considered. If the correlational parameter, ρ, is known, the well-known theory of the general linear model is easily employed to construct the likelihood ratio test for the two-sided alternative. A statistic, T, for the case of ρ unknown is proposed by direct analogy to the likelihood ratio statistic when ρ is known. The null and nonnull distributions of T are investigated by Monte Carlo techniques. It is concluded that T may be compared to the conventional t distribution for testing the null hypothesis and that this procedure results in a substantial increase in power-efficiency over the procedure based on the paired t test, which ignores the incomplete data. A Monte Carlo comparison to two statistics proposed by Lin and Stivers (1974) suggests that the test based on T is more conservative than either of their statistics.
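The statistic T is not specified in the abstract and is not implemented here. The sketch below only shows the kind of Monte Carlo harness involved, estimating size and power for partially paired bivariate normal data using the complete-pairs paired t test and, purely as a naive benchmark that ignores the pairing (not the paper's T), a Welch t test on all available observations.

```python
import numpy as np
from scipy.stats import ttest_rel, ttest_ind

rng = np.random.default_rng(4)

def simulate(delta, rho=0.6, n=40, n_missing=15, reps=2000, alpha=0.05):
    """Monte Carlo rejection rates when some second components are missing."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    rej_paired = rej_naive = 0
    for _ in range(reps):
        xy = rng.multivariate_normal([0.0, delta], cov, size=n)
        x, y = xy[:, 0], xy[:, 1]
        miss = rng.choice(n, size=n_missing, replace=False)   # y missing for these subjects
        x_pair, y_obs = np.delete(x, miss), np.delete(y, miss)
        # Paired t test on the complete pairs only (discards the incomplete data).
        rej_paired += ttest_rel(x_pair, y_obs).pvalue < alpha
        # Naive benchmark using all available observations but ignoring the pairing.
        rej_naive += ttest_ind(x, y_obs, equal_var=False).pvalue < alpha
    return rej_paired / reps, rej_naive / reps

print("size  (delta = 0.0):", simulate(0.0))
print("power (delta = 0.5):", simulate(0.5))
```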
16.
Researchers have proposed that hospitals with excessive statistically unexplained mortality rates are more likely to have quality-of-care problems. The U.S. Health Care Financing Administration currently uses this statistical “outlier” approach to screen for poor quality in hospitals. Little is known, however, about the validity of this technique, since direct measures of quality are difficult to obtain. We use Monte Carlo methods to evaluate the performance of the outlier technique as parameters of the true mortality process are varied. Results indicate that the screening ability of the technique may be very sensitive to how widespread quality-related mortality is among hospitals but insensitive to other factors generally thought to be important.
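The authors' simulation design is not reproduced here. The following toy sketch shows the general shape of such a Monte Carlo evaluation: simulate hospital death counts with a quality-related excess in a fraction of hospitals, flag statistical outliers against a Poisson reference, and tabulate how well the flags track true poor quality. All parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(5)

# Toy setup: each hospital has an expected death count from case mix alone;
# a fraction of hospitals add quality-related excess mortality.
n_hosp, prevalence, excess = 500, 0.10, 1.5
expected = rng.uniform(20, 120, n_hosp)                 # risk-adjusted expected deaths
poor_quality = rng.random(n_hosp) < prevalence
rate = np.where(poor_quality, excess, 1.0)
deaths = rng.poisson(expected * rate)

# Outlier screen: flag hospitals whose observed deaths exceed the 99th
# percentile of a Poisson model with the risk-adjusted expected mean.
threshold = poisson.ppf(0.99, expected)
flagged = deaths > threshold

sensitivity = flagged[poor_quality].mean()
specificity = (~flagged[~poor_quality]).mean()
ppv = poor_quality[flagged].mean() if flagged.any() else float("nan")
print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, PPV {ppv:.2f}")
```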
17.
We suggest a shrinkage-based technique for estimating the covariance matrix in the high-dimensional normal model with missing data. Our approach is based on the monotone missing scheme assumption, meaning that the missing-value patterns occur completely at random. Our asymptotic framework allows the dimensionality p to grow to infinity together with the sample size N, and extends the methodology of Ledoit and Wolf (2004) to the case of two-step monotone missing data. Two new shrinkage-type estimators are derived and their dominance properties over the Ledoit and Wolf (2004) estimator are shown under expected quadratic loss. We perform a simulation study and conclude that the proposed estimators are successful for a range of missing data scenarios.
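The paper's two-step monotone-missing-data estimators are not reproduced here. The sketch below only recalls the Ledoit–Wolf-style shrinkage they extend (shrink the sample covariance toward a scaled identity), computed on the fully observed rows via scikit-learn as an illustrative baseline; the dimensions and covariance structure are assumptions.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(6)

# Toy high-dimensional setting with a two-step monotone pattern: n1 subjects observe
# all p variables, the remaining subjects observe only the first block of variables.
N, p, n1 = 60, 40, 35
true_cov = 0.7 * np.eye(p) + 0.3 * np.ones((p, p))
X = rng.multivariate_normal(np.zeros(p), true_cov, size=N)
X_complete = X[:n1]   # fully observed rows; the partially observed rows are ignored below

# Ledoit-Wolf shrinkage on the complete rows: S_shrunk = (1 - d) * S + d * mu * I.
# (The paper's estimators additionally exploit the partially observed block.)
lw = LedoitWolf().fit(X_complete)
sample_cov = np.cov(X_complete, rowvar=False)
print("shrinkage intensity d:", round(lw.shrinkage_, 3))
print("Frobenius error, sample covariance:", round(np.linalg.norm(sample_cov - true_cov), 2))
print("Frobenius error, Ledoit-Wolf      :", round(np.linalg.norm(lw.covariance_ - true_cov), 2))
```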
18.
An imputation procedure is a procedure by which each missing value in a data set is replaced (imputed) by an observed value using a predetermined resampling procedure. The distribution of a statistic computed from a data set consisting of observed and imputed values, called a completed data set, is affected by the imputation procedure used. In a Monte Carlo experiment, three imputation procedures are compared with respect to the empirical behavior of the goodness-of-fit chi-square statistic computed from a completed data set. The results show that each imputation procedure affects the distribution of the goodness-of-fit chi-square statistic in a different manner. However, when the empirical behavior of the goodness-of-fit chi-square statistic is compared to its appropriate asymptotic distribution, there are no substantial differences between these imputation procedures.
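The three imputation procedures compared in the paper are not identified in the abstract. The sketch below uses a simple random hot-deck imputation, purely as an assumed example, to show how the null behaviour of the goodness-of-fit chi-square statistic from completed data can be checked against its nominal chi-square reference by Monte Carlo.

```python
import numpy as np
from scipy.stats import chisquare, chi2

rng = np.random.default_rng(7)

k, n, miss_rate, reps = 4, 200, 0.2, 5000
probs = np.full(k, 1 / k)                     # null multinomial model

stats = []
for _ in range(reps):
    data = rng.choice(k, size=n, p=probs)
    missing = rng.random(n) < miss_rate
    # Hot-deck imputation: each missing value is replaced by a randomly drawn observed value.
    donors = data[~missing]
    completed = data.copy()
    completed[missing] = rng.choice(donors, size=missing.sum())
    observed_counts = np.bincount(completed, minlength=k)
    stats.append(chisquare(observed_counts, f_exp=n * probs).statistic)

stats = np.array(stats)
nominal = chi2.ppf(0.95, df=k - 1)
print("rejection rate at nominal 5% level:", np.mean(stats > nominal))
```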
19.
In some crossover experiments, particularly in medical applications, subjects may fail to complete their sequences of treatments for reasons unconnected with the treatments received. A method is described for assessing the robustness of a planned crossover design, with more than two periods, to subjects leaving the study prematurely. The method involves computing measures of efficiency for every possible design that can result, and is therefore very computationally intensive. Summaries of these measures are used to choose between competing designs. The computational problem is reduced to a manageable size by a software implementation of Pólya theory. The method is applied to comparing designs for crossover studies involving four treatments and four periods. Designs are identified that are more robust to subjects dropping out in the final period than those currently favoured in medical and clinical trials.
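The Pólya-theory implementation is not reproduced here. The following much-simplified sketch conveys the idea for a four-treatment, four-period design with one subject per sequence: enumerate the designs that result when subjects drop out before the final period and compute an efficiency measure for each. The particular design, the subject + period + treatment model and the average pairwise-contrast variance used as the efficiency measure are all illustrative assumptions.

```python
import numpy as np
from itertools import combinations

# A four-treatment, four-period Williams-square crossover design, one subject per sequence;
# sequences[s][p] is the treatment subject s receives in period p (treatments coded 0-3).
sequences = np.array([[0, 1, 3, 2],
                      [1, 2, 0, 3],
                      [2, 3, 1, 0],
                      [3, 0, 2, 1]])
n_subj, n_per = sequences.shape
n_trt = 4
n_cols = 1 + (n_subj - 1) + (n_per - 1) + (n_trt - 1)   # intercept + dummies (level 0 as reference)

def design_matrix(periods_completed):
    """Model matrix for a subject + period + treatment model, using only the observations
    actually made; periods_completed[s] is how many periods subject s completes."""
    rows = []
    for s in range(n_subj):
        for p in range(periods_completed[s]):
            row = np.zeros(n_cols)
            row[0] = 1.0
            if s > 0:
                row[s] = 1.0                              # subject dummies occupy columns 1..3
            if p > 0:
                row[n_subj - 1 + p] = 1.0                 # period dummies occupy columns 4..6
            t = sequences[s, p]
            if t > 0:
                row[n_subj - 1 + n_per - 1 + t] = 1.0     # treatment dummies occupy columns 7..9
            rows.append(row)
    return np.array(rows)

def avg_contrast_variance(X):
    """Average variance (in units of sigma^2) of all pairwise treatment contrasts."""
    xtx_pinv = np.linalg.pinv(X.T @ X)     # pinv copes with patterns that lose a whole period
    coef = np.zeros((n_trt, n_cols))       # coefficient vector picking out each treatment effect
    for t in range(1, n_trt):
        coef[t, n_subj - 1 + n_per - 1 + t] = 1.0
    return np.mean([(coef[i] - coef[j]) @ xtx_pinv @ (coef[i] - coef[j])
                    for i, j in combinations(range(n_trt), 2)])

full_var = avg_contrast_variance(design_matrix([n_per] * n_subj))
for k in range(n_subj + 1):
    worst_var = max(
        avg_contrast_variance(design_matrix([n_per - (s in drop) for s in range(n_subj)]))
        for drop in combinations(range(n_subj), k)
    )
    print(f"{k} final-period dropouts: worst-case relative efficiency {full_var / worst_var:.3f}")
```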
20.
The zero-inflated Poisson model and the decayed, missing and filled teeth index in dental epidemiology
D. Böhning, E. Dietz, P. Schlattmann, L. Mendonça & U. Kirchner, Journal of the Royal Statistical Society, Series A (Statistics in Society), 1999, 162(2): 195–209
For frequency counts, the situation of extra zeros often arises in biomedical applications. This is demonstrated with count data from a dental epidemiological study in Belo Horizonte (the Belo Horizonte caries prevention study), which evaluated various programmes for reducing caries. Extra zeros, however, violate the variance–mean relationship of the Poisson error structure. This extra-Poisson variation can easily be explained by a special mixture model, the zero-inflated Poisson (ZIP) model. On the basis of the ZIP model, a graphical device is presented which not only summarizes the mixing distribution but also provides visual information about the overall mean. This device can be exploited to evaluate and compare various groups. Ways are discussed to include covariates and to develop an extension of the conventional Poisson regression. Finally, a method to evaluate intervention effects on the basis of the ZIP regression model is described and applied to the data of the Belo Horizonte caries prevention study.
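As a minimal, self-contained illustration of the ZIP model itself (not the paper's analysis of the Belo Horizonte data), the sketch below fits a zero-inflated Poisson to simulated DMFT-like counts by maximizing the ZIP log-likelihood directly; all simulation settings are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(8)

# Simulated DMFT-like counts: a point mass of structural zeros mixed with a Poisson.
n, omega_true, lam_true = 1000, 0.3, 2.5
structural_zero = rng.random(n) < omega_true
y = np.where(structural_zero, 0, rng.poisson(lam_true, n))

def zip_negloglik(params):
    """Negative log-likelihood of the zero-inflated Poisson:
    P(0) = omega + (1 - omega) * exp(-lam);  P(k) = (1 - omega) * Poisson(k; lam), k >= 1."""
    omega, lam = 1 / (1 + np.exp(-params[0])), np.exp(params[1])   # keep parameters in range
    p_zero = omega + (1 - omega) * np.exp(-lam)
    ll = np.where(y == 0, np.log(p_zero), np.log(1 - omega) + poisson.logpmf(y, lam))
    return -ll.sum()

res = minimize(zip_negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
omega_hat, lam_hat = 1 / (1 + np.exp(-res.x[0])), np.exp(res.x[1])
print(f"estimated zero-inflation {omega_hat:.3f} (true {omega_true}), "
      f"Poisson mean {lam_hat:.3f} (true {lam_true})")
```

For the covariate extension mentioned in the abstract, an off-the-shelf zero-inflated Poisson regression implementation (for example in statsmodels) can be used instead of this hand-rolled likelihood.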