5,968 results found (search time: 15 ms)
61.
Gabriel Escarela, Luis Carlos Pérez-Ruíz, Russell J. Bowater. Journal of Applied Statistics, 2009, 36(6): 647-657
A fully parametric first-order autoregressive (AR(1)) model is proposed for analysing binary longitudinal data. By using a discretized version of a copula, the modelling approach allows one to construct separate models for the marginal response and for the dependence between adjacent responses. In particular, the transition model considered here discretizes the Gaussian copula in such a way that the marginal distribution is Bernoulli. A probit link is used to take concomitant information into account in the behaviour of the underlying marginal distribution. Fixed and time-varying covariates can be included in the model. The method is simple and is a natural extension of the AR(1) model for Gaussian series. Since the approach put forward is likelihood-based, it allows interpretations and inferences that are not possible with semi-parametric approaches such as those based on generalized estimating equations. Data from a study designed to reduce children's exposure to the sun are used to illustrate the methods.
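As a rough illustration of the construction described above (an assumption for intuition only, not the authors' code): a binary series whose marginals are Bernoulli and whose adjacent-pair dependence comes from a Gaussian copula can be simulated by thresholding a latent stationary Gaussian AR(1) process, with the threshold set through the probit link.

```python
# Illustrative sketch: Y_t = 1{Z_t > c}, where Z is a stationary Gaussian AR(1)
# process. The marginal of Y_t is Bernoulli(p) and the dependence between
# adjacent Y's is that of a discretized Gaussian copula with correlation rho.
import math
import random

random.seed(42)

def simulate_binary_ar1(n, rho, p):
    """Simulate n binary observations with P(Y_t = 1) = p via a probit threshold."""
    # Standard normal CDF via erf (stdlib only).
    def phi(x):
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    # Find the threshold c with Phi(c) = 1 - p by bisection.
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if phi(mid) < 1.0 - p:
            lo = mid
        else:
            hi = mid
    c = (lo + hi) / 2.0
    z = random.gauss(0.0, 1.0)  # stationary start
    ys = []
    for _ in range(n):
        ys.append(1 if z > c else 0)
        # Stationary AR(1) update keeps Var(Z) = 1, Corr(Z_t, Z_{t-1}) = rho.
        z = rho * z + math.sqrt(1.0 - rho ** 2) * random.gauss(0.0, 1.0)
    return ys

y = simulate_binary_ar1(10_000, rho=0.6, p=0.3)
print(sum(y) / len(y))  # empirical marginal probability, close to 0.3
```

The empirical frequency of ones recovers the intended Bernoulli marginal, while successive observations remain dependent through the latent Gaussian series.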
62.
Binbing Yu. Journal of Applied Statistics, 2009, 36(7): 769-778
In disease screening and diagnosis, multiple markers are often measured and combined to improve the accuracy of diagnosis. McIntosh and Pepe [Combining several screening tests: optimality of the risk score, Biometrics 58 (2002), pp. 657-664] showed that the risk score, defined as the probability of disease conditional on multiple markers, is the optimal function for classification, based on the Neyman-Pearson lemma. They proposed a two-step procedure to approximate the risk score. However, the resulting receiver operating characteristic (ROC) curve is defined only on a subrange (L, h) of false-positive rates in (0, 1), and determining the lower limit L requires extra prior information. In practice, most diagnostic tests are not perfect, and it is rare for a single marker to be uniformly better than the other tests. Using simulation, I show that multivariate adaptive regression splines (MARS) are a useful tool for approximating the risk score when combining multiple markers, especially when the ROC curves from multiple tests cross. The resulting ROC curve is defined on the whole range (0, 1), is easy to implement, and has an intuitive interpretation. Sample code for the application is shown in the appendix.
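To see why the risk score is the right combining function, consider a toy model where it can be computed exactly (everything below is an illustrative assumption, not the paper's MARS procedure): with two Gaussian markers, the risk score P(disease | markers) is monotone in the log-likelihood ratio, and its empirical ROC AUC dominates that of either marker alone.

```python
# Toy comparison: AUC of a single marker versus AUC of the exact risk score
# in a made-up Gaussian model (diseased markers ~ N(1,1), healthy ~ N(0,1)).
import math
import random

random.seed(1)

n = 500
diseased = [(random.gauss(1, 1), random.gauss(1, 1)) for _ in range(n)]
healthy = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]

def risk_score(x1, x2):
    """P(D = 1 | x1, x2) at equal prevalence; monotone in x1 + x2 in this model."""
    llr = x1 + x2 - 1.0  # log-likelihood ratio for the toy Gaussian model
    return 1.0 / (1.0 + math.exp(-llr))

def auc(pos_scores, neg_scores):
    """Empirical AUC: P(score of a diseased case > score of a healthy case)."""
    wins = sum((p > q) + 0.5 * (p == q) for p in pos_scores for q in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

auc_single = auc([x1 for x1, _ in diseased], [x1 for x1, _ in healthy])
auc_combined = auc([risk_score(*m) for m in diseased],
                   [risk_score(*m) for m in healthy])
print(auc_single, auc_combined)  # the combined risk score should dominate
```

In this toy setting the theoretical AUCs are roughly 0.76 for one marker and 0.84 for the risk score; MARS enters in the paper precisely because the true risk score is unknown and must be approximated flexibly.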
63.
It sometimes occurs that one or more components of the data exert a disproportionate influence on the model estimation. We need a reliable tool for identifying such troublesome cases, in order to decide either to eliminate them from the sample, when the data collection was badly carried out, or otherwise to use the model with caution, because the results could be affected by such components. Since Cook [Detection of influential observations in linear regression, Technometrics 19 (1977), pp. 15-18] proposed a measure for detecting influential cases in the linear regression setting, several new measures, beyond extensions of the same measure to other models, have been suggested as single-case diagnostics. For most of them, cutoff values have been recommended (see [D.A. Belsley, E. Kuh, and R.E. Welsch, Regression Diagnostics: Identifying Influential Data and Sources of Collinearity, 2nd ed., John Wiley & Sons, New York, 2004], for instance); however, the lack of a quantile-type cutoff for Cook's statistic has led analysts to rely only on index plots as diagnostic tools. Focusing on logistic regression, the aim of this paper is to provide the asymptotic distribution of Cook's distance in order to obtain a meaningful cutoff point for detecting influential and leverage observations.
64.
Vittorio Addona, Masoud Asgharian, David B. Wolfson. Revue canadienne de statistique, 2009, 37(2): 206-218
For many diseases, logistical constraints render large incidence studies difficult to carry out. This becomes a drawback, particularly when a new study is needed each time the incidence rate is investigated in a new population. By carrying out a prevalent cohort study with follow-up it is possible to estimate the incidence rate if it is constant. The authors derive the maximum likelihood estimator (MLE) of the overall incidence rate, λ, as well as age-specific incidence rates, by exploiting the epidemiologic relationship (prevalence odds) = (incidence rate) × (mean duration), i.e. P/[1 − P] = λ × µ. The authors establish the asymptotic distributions of the MLEs and provide approximate confidence intervals for the parameters. Moreover, the MLE of λ is asymptotically most efficient and is the natural estimator obtained by substituting the marginal maximum likelihood estimators of P and µ into P/[1 − P] = λ × µ. Following up the subjects allows the authors to develop these widely applicable procedures. The authors apply their methods to data collected as part of the Canadian Study of Health and Ageing to estimate the incidence rate of dementia amongst elderly Canadians. The Canadian Journal of Statistics © 2009 Statistical Society of Canada
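The epidemiologic identity underlying this estimator can be rearranged to λ = (P/(1 − P))/µ. A minimal numeric sketch (with invented numbers, purely for illustration):

```python
# Recover a constant incidence rate lambda from the identity
# (prevalence odds) = (incidence rate) x (mean duration),
# i.e. lambda = (P / (1 - P)) / mu. The figures below are made up.

def incidence_rate(prevalence, mean_duration):
    """Incidence rate implied by prevalence P and mean disease duration mu."""
    odds = prevalence / (1.0 - prevalence)
    return odds / mean_duration

# Suppose 8% of a population currently has the disease (P = 0.08)
# and the mean disease duration is 5 years (mu = 5).
lam = incidence_rate(0.08, 5.0)
print(round(lam, 5))  # about 0.01739 cases per person-year
```

In the paper the MLE of λ has exactly this plug-in form, with the marginal MLEs of P and µ substituted in.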
65.
In some statistical problems a degree of explicit, prior information is available about the value taken by the parameter of interest, θ say, although the information is much less than would be needed to place a prior density on the parameter's distribution. Often the prior information takes the form of a simple bound, 'θ > θ1' or 'θ < θ1', where θ1 is determined by physical considerations or mathematical theory, such as positivity of a variance. A conventional approach to accommodating the requirement that θ > θ1 is to replace an estimator, θ̂, of θ by the maximum of θ̂ and θ1. However, this technique is generally inadequate. For one thing, it does not respect the strictness of the inequality θ > θ1, which can be critical in interpreting results. For another, it produces an estimator that does not respond in a natural way to perturbations of the data. In this paper we suggest an alternative approach, in which bootstrap aggregation, or bagging, is used to overcome these difficulties. Bagging gives estimators that, when subjected to the constraint θ > θ1, strictly exceed θ1 except in extreme settings in which the empirical evidence strongly contradicts the constraint. Bagging also reduces estimator variability in the important case for which θ̂ is close to θ1, and more generally produces estimators that respect the constraint in a smooth, realistic fashion.
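A minimal sketch of the bagging idea (an assumption for illustration, not the paper's exact procedure): instead of applying max(θ̂, θ1) once to the full-sample estimate, apply the constraint to each bootstrap resample's estimate and average, so the result responds smoothly to the data and exceeds θ1 unless essentially every resample contradicts the bound.

```python
# Bagged constrained estimator: average max(theta_hat*, theta1) over
# bootstrap resamples. Data and the choice of a sample-mean estimator
# are illustrative assumptions.
import random

random.seed(0)

def bagged_constrained_mean(data, theta1, n_boot=500):
    """Average the constrained estimator max(theta_hat*, theta1) over resamples."""
    n = len(data)
    total = 0.0
    for _ in range(n_boot):
        resample = [random.choice(data) for _ in range(n)]
        theta_hat = sum(resample) / n      # plug-in estimator on the resample
        total += max(theta_hat, theta1)    # constraint applied per resample
    return total / n_boot

data = [random.gauss(0.05, 1.0) for _ in range(50)]  # true mean just above 0
estimate = bagged_constrained_mean(data, theta1=0.0)
# Unlike max(theta_hat, 0), the bagged estimate strictly exceeds 0 unless
# nearly all resample means fall below the bound.
print(estimate)
```

The smoothing over resamples is what removes the hard kink of the naive max(θ̂, θ1) rule.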
66.
The maximum likelihood estimator (MLE) and the likelihood ratio test (LRT) are considered for making inference about the scale parameter of the exponential distribution in the case of moving extreme ranked set sampling (MERSS). The MLE and LRT cannot be written in closed form. Therefore, a modification of the MLE using the technique suggested by Mehrotra and Nanda (Biometrika 61:601-606, 1974) is considered, and this modified estimator is used to modify the LRT so as to obtain a closed-form test of a simple hypothesis against one-sided alternatives. The same idea is used to modify the most powerful test (MPT) for a simple hypothesis versus a simple hypothesis, again yielding a closed-form test of a simple hypothesis against one-sided alternatives. The modified estimator turns out to be a good competitor of the MLE, and the modified tests are good competitors of the LRT, under both MERSS and simple random sampling (SRS).
67.
Robert L. Paige, A. Alexandre Trindade, P. Harshini Fernando. Scandinavian Journal of Statistics, 2009, 36(1): 98-111
Abstract. We propose an easy-to-implement method for making small-sample parametric inference about the root of an estimating equation expressible as a quadratic form in normal random variables. It is based on saddlepoint approximations to the distribution of the estimating equation whose unique root is a parameter's maximum likelihood estimator (MLE), while substituting conditional MLEs for the remaining (nuisance) parameters. Monotonicity of the estimating equation in its parameter argument enables us to relate these approximations to those for the estimator of interest. The proposed method is equivalent to a parametric bootstrap percentile approach in which Monte Carlo simulation is replaced by saddlepoint approximation. It finds applications in many areas of statistics, including nonlinear regression, time series analysis, inference on ratios of regression parameters in linear models, and calibration. We demonstrate the method in the context of some classical examples from nonlinear regression models and ratio-of-regression-parameters problems. Simulation results for these show that the proposed method, apart from being generally easier to implement, yields confidence intervals with lengths and coverage probabilities that compare favourably with those obtained from several competing methods proposed in the literature over the past half-century.
68.
Howard D. Bondell, Lexin Li. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 2009, 71(1): 287-299
Summary. The family of inverse regression estimators recently proposed by Cook and Ni has proven effective in dimension reduction by transforming the high dimensional predictor vector to its low dimensional projections. We propose a general shrinkage estimation strategy for the entire inverse regression estimation family that is capable of simultaneous dimension reduction and variable selection. We demonstrate that the new estimators achieve consistency in variable selection without requiring any traditional model, while retaining the root-n estimation consistency of the dimension reduction basis. We also show the effectiveness of the new estimators through both simulation and real data analysis.
69.
On distribution-weighted partial least squares with diverging number of highly correlated predictors
Li-Ping Zhu, Li-Xing Zhu. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 2009, 71(2): 525-548
Summary. Because highly correlated data arise in many scientific fields, we investigate parameter estimation in a semiparametric regression model with a diverging number of highly correlated predictors. To this end, we first develop a distribution-weighted least squares estimator that can recover directions in the central subspace, then use this estimator as a seed vector and project it onto a Krylov space by partial least squares to avoid computing the inverse of the covariance of the predictors. Thus, distribution-weighted partial least squares can handle cases with high dimensional and highly correlated predictors. Furthermore, we suggest an iterative algorithm for obtaining a better initial value before implementing partial least squares. For the theoretical investigation, we obtain strong consistency and asymptotic normality when the dimension p of the predictors grows at rate O{n^{1/2}/log(n)} and o(n^{1/3}), respectively, where n is the sample size. When there are no other constraints on the covariance of the predictors, the rates n^{1/2} and n^{1/3} are optimal. We also propose a Bayesian information criterion type of criterion to estimate the dimension of the Krylov space in the partial least squares procedure. Illustrative examples with a real data set and comprehensive simulations demonstrate that the method is robust to non-ellipticity and works well even in 'small n, large p' problems.
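The Krylov-projection device can be illustrated with a small numpy sketch (synthetic matrices; an assumption for intuition, not the authors' estimator): rather than forming Σ⁻¹s directly, which is expensive and unstable under high correlation, one searches for the direction within the Krylov space span{s, Σs, Σ²s, ...}, as partial least squares does.

```python
# Approximate Sigma^{-1} s via a Krylov-space projection, without inverting
# Sigma, and compare with the direct solve. All matrices are synthetic.
import numpy as np

rng = np.random.default_rng(0)

p = 4
A = rng.standard_normal((p, p))
Sigma = A @ A.T + p * np.eye(p)   # a well-conditioned "covariance" matrix
s = rng.standard_normal(p)        # seed vector (the DWLS estimator's role)

# Krylov basis: columns s, Sigma s, ..., Sigma^{p-1} s.
K = np.column_stack([np.linalg.matrix_power(Sigma, j) @ s for j in range(p)])

# Solve min || Sigma K gamma - s || instead of forming Sigma^{-1}.
gamma, *_ = np.linalg.lstsq(Sigma @ K, s, rcond=None)
beta_krylov = K @ gamma

beta_direct = np.linalg.solve(Sigma, s)  # reference answer (inverts Sigma)
print(np.allclose(beta_krylov, beta_direct, atol=1e-6))
```

With a full p-dimensional Krylov basis the projection is exact (by the Cayley-Hamilton theorem Σ⁻¹ is a polynomial in Σ of degree at most p − 1); the paper's point is that a low-dimensional Krylov space, with its dimension chosen by a BIC-type criterion, already captures the direction of interest.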
70.
Using revenue measures to explain excess stock returns is an important part of understanding "pricing anomalies". Based on the theoretical framework of post-earnings-announcement drift, and using data on Shanghai A-shares from the first quarter of 2008 to the fourth quarter of 2011, this paper empirically tests for post-revenue-announcement drift in stock prices over announcement windows in the Chinese stock market, using a standardized unexpected revenue estimator (SURE) and a classified-test model. The study finds that, within the earnings announcement window, unexpected revenue is negatively or insignificantly related to excess stock returns; that is, the post-revenue-announcement drift effect is not significant in the Chinese stock market. Subsequent robustness analysis likewise confirms the negative or insignificant relationship, and this anomaly may be related to the weak-form efficiency of the Chinese stock market.
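A hedged sketch of the standardized-unexpected-revenue construction used in such drift studies. The expectation model below (a seasonal random walk: expected quarterly revenue equals revenue four quarters earlier) is a common convention assumed here for illustration; the paper may use a different specification, and the revenue figures are invented.

```python
# SURE for the latest quarter: (actual - expected) / sd of past surprises,
# where expected revenue is the same quarter of the previous year.
import statistics

def sure(revenues):
    """Standardized unexpected revenue for the final quarter of the series."""
    surprises = [revenues[t] - revenues[t - 4] for t in range(4, len(revenues))]
    past, latest = surprises[:-1], surprises[-1]
    return latest / statistics.stdev(past)

# Invented quarterly revenues (in millions), three years of data.
rev = [100, 110, 105, 120, 108, 118, 114, 130, 121, 127, 122, 141]
print(round(sure(rev), 3))
```

In the drift literature, stocks are then sorted into portfolios by SURE and post-announcement returns are compared across portfolios; the paper's finding is that this spread is negative or insignificant for Chinese A-shares.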