11.
For a dose finding study in cancer, the most successful dose (MSD), among a group of available doses, is the dose at which the overall success rate is highest. This rate is the product of the rate of seeing non-toxicities and the rate of tumor response. A successful dose finding trial in this context is one in which we identify the MSD efficiently. In practice we may also need to consider algorithms for identifying the MSD that incorporate certain restrictions, the most common being to keep the estimated toxicity rate below some maximum rate. In this case the MSD may correspond to a different level than the unconstrained MSD and, in providing a final recommendation, it is important to underline that it is subject to the given constraint. We work with the approach described in O'Quigley et al. [Biometrics 2001; 57(4):1018-1029]. The focus of that work was dose finding in HIV, where information on both toxicity and efficacy was almost immediately available. Recent cancer studies are beginning to fall under the same heading: as before, toxicity can be quickly evaluated and, in addition, we can rely on biological markers or other measures of tumor response. Mindful of the particular context of cancer, our purpose here is to consider the methodology developed by O'Quigley et al. and its practical implementation. We also carry out a study on the doubly under-parameterized model, developed by O'Quigley et al. but not …
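The definition of the MSD, with and without a toxicity constraint, can be sketched in a few lines. The per-dose probabilities below are hypothetical (invented for illustration, not taken from the paper); the point is only that the constrained and unconstrained recommendations can differ:

```python
import numpy as np

# Hypothetical per-dose probabilities at five dose levels (not from the paper):
# P(no toxicity) falls and P(tumor response) rises with dose.
p_no_tox = np.array([0.95, 0.90, 0.80, 0.65, 0.45])
p_response = np.array([0.10, 0.25, 0.45, 0.60, 0.70])

def most_successful_dose(p_no_tox, p_response, max_tox_rate=None):
    """Index of the MSD: the dose maximising the overall success rate
    P(no toxicity) * P(response), optionally restricted to doses whose
    toxicity rate 1 - P(no toxicity) stays at or below max_tox_rate."""
    success = p_no_tox * p_response
    if max_tox_rate is not None:
        allowed = (1.0 - p_no_tox) <= max_tox_rate
        success = np.where(allowed, success, -np.inf)
    return int(np.argmax(success))

print(most_successful_dose(p_no_tox, p_response))                      # -> 3
print(most_successful_dose(p_no_tox, p_response, max_tox_rate=0.25))   # -> 2
```

With these illustrative numbers the unconstrained MSD is dose level 3 (success rate 0.39), but a 25% toxicity ceiling moves the recommendation down to level 2, which is exactly the situation in which the final recommendation must be flagged as constraint-dependent.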
12.
We extend Wei and Tanner's (1991) multiple imputation approach in semi-parametric linear regression for univariate censored data to clustered censored data. The main idea is to iterate the following two steps: 1) use data augmentation to impute the censored failure times; 2) fit a linear model to the imputed complete data that takes into account the clustering among failure times. In particular, we propose using generalized estimating equations (GEE) or a linear mixed-effects model to implement the second step. Through simulation studies, our proposal compares favorably to the independence approach (Lee et al., 1993), which ignores the within-cluster correlation in estimating the regression coefficient. Our proposal is easy to implement with existing software.
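A minimal sketch of the two-step iteration, under simplifying assumptions: a normal linear model for the response, a rejection sampler for the truncated-normal imputation draws, and ordinary least squares standing in for the GEE/mixed-model fit of step 2. The data, true coefficients, and all tuning constants are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy right-censored data (illustrative): latent response y = 1.0 + 0.5*x + e
# with e ~ N(0, 0.5^2), right-censored at c.
n = 300
x = rng.normal(size=n)
y_latent = 1.0 + 0.5 * x + 0.5 * rng.normal(size=n)
c = 1.5 + 0.5 * rng.normal(size=n)
y_obs = np.minimum(y_latent, c)
event = y_latent <= c                      # True = observed, False = censored

def draw_truncated(mu, sigma, lower, rng, max_iter=10000):
    """Sample N(mu, sigma^2) restricted to (lower, inf) by rejection."""
    out = lower.copy()                     # fallback: sit at the bound
    todo = np.ones(lower.shape, bool)
    for _ in range(max_iter):
        if not todo.any():
            break
        draw = rng.normal(mu[todo], sigma)
        accept = draw > lower[todo]
        idx = np.where(todo)[0][accept]
        out[idx] = draw[accept]
        todo[idx] = False
    return out

X = np.column_stack([np.ones(n), x])
beta, sigma = np.zeros(2), 1.0
for _ in range(25):
    # Step 1: data augmentation -- draw each censored time from the current
    # model's conditional distribution given that it exceeds the censoring time.
    mu = X @ beta
    y_imp = y_obs.copy()
    y_imp[~event] = draw_truncated(mu[~event], sigma, y_obs[~event], rng)
    # Step 2: refit the linear model to the completed data (OLS here; the
    # paper's proposal uses GEE or a linear mixed-effects model to respect
    # the within-cluster correlation).
    beta, *_ = np.linalg.lstsq(X, y_imp, rcond=None)
    sigma = float(np.std(y_imp - X @ beta))

print(np.round(beta, 2))   # close to the true values (1.0, 0.5)
```

Swapping the OLS line for a GEE or mixed-model fit is what turns this univariate sketch into the clustered version the abstract describes.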
13.
Simon C. Barry & A. H. Welsh 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2001,63(1):23-31
We consider the method of distance sampling described by Buckland, Anderson, Burnham and Laake in 1993. We explore the properties of the methodology in simple cases chosen to allow direct and accessible comparisons of distance sampling in the design- and model-based frameworks. In particular, we obtain expressions for the bias and variance of the distance sampling estimator of object density and for the expected value of the recommended analytic variance estimator within each framework. These results enable us to clarify aspects of the performance of the methodology which may be of interest to users and potential users of distance sampling.
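The behaviour of the density estimator can be explored by simulation in the standard line-transect setting with half-normal detection. Everything below (true density, transect length, detection scale) is a hypothetical configuration, not one of the paper's cases:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical line-transect survey: objects uniform over a strip of
# half-width w around a transect of length L, detected with half-normal
# probability g(y) = exp(-y^2 / (2 s^2)) at perpendicular distance y.
D_true, L, w, s = 50.0, 20.0, 1.0, 0.4

def survey(rng):
    N = rng.poisson(D_true * 2 * w * L)          # objects present in the strip
    y = rng.uniform(-w, w, size=N)               # perpendicular distances
    seen = rng.uniform(size=N) < np.exp(-y**2 / (2 * s**2))
    return np.abs(y[seen])

def estimate_density(y, L):
    # Half-normal MLE for the scale, then the effective strip half-width
    # mu = integral of g(y) dy = s_hat * sqrt(pi/2); D_hat = n / (2 L mu).
    s_hat = np.sqrt(np.mean(y**2))
    mu = s_hat * np.sqrt(np.pi / 2)
    return len(y) / (2 * L * mu)

est = [estimate_density(survey(rng), L) for _ in range(500)]
print(np.mean(est))   # close to D_true, illustrating near-unbiasedness
```

Repeating such simulations under design-based (fixed objects, random transects) versus model-based (random objects) assumptions is the kind of comparison the abstract's analytic expressions formalise.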
14.
Edwin Choi & Peter Hall 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2000,62(2):461-477
Given a linear time series, e.g. an autoregression of infinite order, we may construct a finite order approximation and use that as the basis for confidence regions. The sieve or autoregressive bootstrap, as this method is often called, is generally seen as a competitor with the better-understood block bootstrap approach. However, in the present paper we argue that, for linear time series, the sieve bootstrap has significantly better performance than blocking methods and offers a wider range of opportunities. In particular, since it does not corrupt second-order properties, it may be used in a double-bootstrap form, with the second bootstrap application being employed to calibrate a basic percentile method confidence interval. This approach confers second-order accuracy without the need to estimate variance. That offers substantial benefits, since variances of statistics based on time series can be difficult to estimate reliably, and—partly because of the relatively small amount of information contained in a dependent process—are notorious for causing problems when used to Studentize. Other advantages of the sieve bootstrap include considerably greater robustness against variations in the choice of the tuning parameter, here equal to the autoregressive order, and the fact that, in contradistinction to the case of the block bootstrap, the percentile t version of the sieve bootstrap may be based on the 'raw' estimator of standard error. In the process of establishing these properties we show that the sieve bootstrap is second-order correct.
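The basic sieve bootstrap is simple to sketch: fit a finite-order AR approximation, resample its centred residuals, and regenerate series from the fitted recursion. Here it is applied to a percentile interval for the mean of a simulated AR(1) series; the sample size, sieve order and number of resamples are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated AR(1) data standing in for "a linear time series".
n, phi = 400, 0.6
y = np.zeros(n + 100)
for t in range(1, n + 100):
    y[t] = phi * y[t - 1] + rng.normal()
y = y[100:]                               # drop burn-in

# Fit the AR(p) sieve by least squares.
p = 5
lags = np.column_stack([y[p - k - 1: n - k - 1] for k in range(p)])
design = np.column_stack([np.ones(n - p), lags])
coef, *_ = np.linalg.lstsq(design, y[p:], rcond=None)
resid = y[p:] - design @ coef
resid -= resid.mean()                     # centre the residuals

# Regenerate series from the fitted recursion with resampled residuals.
boot_means = []
for _ in range(500):
    eb = rng.choice(resid, size=n + 100, replace=True)
    yb = np.zeros(n + 100)
    for t in range(p, n + 100):
        yb[t] = coef[0] + coef[1:] @ yb[t - p:t][::-1] + eb[t]
    boot_means.append(yb[100:].mean())

lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(lo, hi)                             # basic percentile interval for the mean
```

The double-bootstrap calibration the abstract describes would wrap a second, inner resampling loop around each regenerated series to adjust the nominal 2.5/97.5 percentile levels; it is omitted here for brevity.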
15.
Cornwell, Schmidt, and Sickles (1990) and Kumbhakar (1990), among others, developed stochastic frontier production models which allow firm-specific inefficiency levels to change over time. These studies assumed arbitrary restrictions on the short-run dynamics of efficiency levels which have little theoretical justification. Further, the models are inappropriate for estimation of long-run efficiencies. We consider estimation of an alternative frontier model in which firm-specific technical inefficiency levels are autoregressive. This model is particularly useful to examine a potential dynamic link between technical innovations and production inefficiency levels. We apply our methodology to a panel of US airlines.
16.
Designing and integrating composite networks for monitoring multivariate gaussian pollution fields
J. V. Zidek, W. Sun & N. D. Le 《Journal of the Royal Statistical Society. Series C, Applied statistics》2000,49(1):63-79
Networks of ambient monitoring stations are used to monitor environmental pollution fields such as those for acid rain and air pollution. Such stations provide regular measurements of pollutant concentrations. The networks are established for a variety of purposes at various times so often several stations measuring different subsets of pollutant concentrations can be found in compact geographical regions. The problem of statistically combining these disparate information sources into a single 'network' then arises. Capitalizing on the efficiencies so achieved can then lead to the secondary problem of extending this network. The subject of this paper is a set of 31 air pollution monitoring stations in southern Ontario. Each of these regularly measures a particular subset of ionic sulphate, sulphite, nitrite and ozone. However, this subset varies from station to station. For example only two stations measure all four. Some measure just one. We describe a Bayesian framework for integrating the measurements of these stations to yield a spatial predictive distribution for unmonitored sites and unmeasured concentrations at existing stations. Furthermore we show how this network can be extended by using an entropy maximization criterion. The methods assume that the multivariate response field being measured has a joint Gaussian distribution conditional on its mean and covariance function. A conjugate prior is used for these parameters, some of its hyperparameters being fitted empirically.
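For a Gaussian field the entropy of a set of sites is, up to constants, the log-determinant of their covariance submatrix, so extending the network by entropy maximization amounts to a log-det search. A greedy sketch over hypothetical candidate sites (invented coordinates, an assumed exponential covariance — not the Ontario data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical candidate sites with an exponential spatial covariance.
m = 40
pts = rng.uniform(0, 10, size=(m, 2))
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
Sigma = np.exp(-d / 3.0)

def greedy_entropy_design(Sigma, k):
    """Greedily add, k times, the site that maximises the Gaussian entropy
    of the chosen set, i.e. log det of the selected covariance submatrix."""
    chosen = []
    for _ in range(k):
        best, best_val = None, -np.inf
        for j in range(len(Sigma)):
            if j in chosen:
                continue
            idx = chosen + [j]
            val = np.linalg.slogdet(Sigma[np.ix_(idx, idx)])[1]
            if val > best_val:
                best, best_val = j, val
        chosen.append(best)
    return chosen

sites = greedy_entropy_design(Sigma, 5)
print(sites)   # indices of the 5 selected sites, typically well spread out
```

Because log det is submodular, this greedy selection carries the usual (1 − 1/e)-style approximation guarantee for the exact entropy-maximizing design, which is combinatorially hard to compute directly.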
17.
In this paper we consider the problem of estimating a coefficient of a strongly elliptic partial differential operator in stochastic parabolic equations. The coefficient is a bounded function of time. We compute the maximum likelihood estimate of the function on an approximating space (sieve) using a finite number of the spatial Fourier coefficients of the solution and establish conditions that guarantee consistency and asymptotic normality of the resulting estimate as the number of the coefficients increases. The equation is assumed diagonalizable in the sense that all the operators have a common system of eigenfunctions.
18.
Stein's method is used to prove the Lindeberg-Feller theorem and a generalization of the Berry-Esséen theorem. The arguments involve only manipulation of probability inequalities, and form an attractive alternative to the less direct Fourier-analytic methods which are traditionally employed.
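The identity at the heart of Stein's method — E[f′(Z) − Z f(Z)] = 0 if and only if Z ~ N(0,1), for suitable smooth f — is easy to check numerically. A Monte Carlo sketch with f = sin as an arbitrary smooth test function (the choice of f and the sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Stein characterisation of N(0,1): E[f'(Z) - Z f(Z)] = 0 for smooth f.
z = rng.normal(size=1_000_000)
stein_normal = np.mean(np.cos(z) - z * np.sin(z))

# For a non-normal variable the same expression is bounded away from zero;
# bounding it uniformly over a class of f is what yields distance bounds
# such as Berry-Esseen-type results.
w = rng.uniform(-1.0, 1.0, size=1_000_000)
stein_uniform = np.mean(np.cos(w) - w * np.sin(w))

print(round(stein_normal, 3), round(stein_uniform, 3))
```

For the uniform case the exact value is sin 1 − (sin 1 − cos 1) = cos 1 ≈ 0.54, so the Monte Carlo estimate should sit near 0.54 while the normal case sits near 0.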
19.
Atomic absorption spectrophotometry was used to determine the contents of seven elements (Cu, Zn, Mn, Fe, Ca, Mg and K) in 208 samples of the leaves, shoots and young culms, one-year-old culms and two-year-old culms of the bamboo species that form the giant panda's staple diet in the Wolong Nature Reserve: Bashania fangiana (冷箭竹), Fargesia robusta (拐棍竹), Fargesia nitida (华西箭竹) and Yushania chungii (峨眉玉山竹). Relationships were identified between trace element content and bamboo species, culm age, plant part, growing environment (altitude and season), and the feeding habits of the giant panda.
20.
Peter Hall & Tapabrata Maiti 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2009,71(3):703-718
Summary. We develop a general non-parametric approach to the analysis of clustered data via random effects. Assuming only that the link function is known, the regression functions and the distributions of both cluster means and observation errors are treated non-parametrically. Our argument proceeds by viewing the observation error at the cluster mean level as though it were a measurement error in an errors-in-variables problem, and using a deconvolution argument to access the distribution of the cluster mean. A Fourier deconvolution approach could be used if the distribution of the error-in-variables were known. In practice it is unknown, of course, but it can be estimated from repeated measurements, and in this way deconvolution can be achieved in an approximate sense. This argument might be interpreted as implying that large numbers of replicates are necessary for each cluster mean distribution, but that is not so; we avoid this requirement by incorporating statistical smoothing over values of nearby explanatory variables. Empirical rules are developed for the choice of smoothing parameter. Numerical simulations, and an application to real data, demonstrate small sample performance for this package of methodology. We also develop theory establishing statistical consistency.
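The Fourier deconvolution step the summary mentions can be sketched directly in the known-error case (the replicate-based approximate version the authors actually rely on is more involved). All distributions, the bandwidth, and the cutoff kernel below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Observe W = X + U with X ~ N(2, 1) and a KNOWN error law U ~ N(0, 0.5^2);
# recover the density of X by dividing characteristic functions.
n, s_u = 4000, 0.5
x = rng.normal(2.0, 1.0, size=n)
w = x + rng.normal(0.0, s_u, size=n)

h = 0.25                                          # smoothing bandwidth
t = np.linspace(-1 / h, 1 / h, 401)               # band-limited frequencies
phi_K = (1.0 - (t * h) ** 2) ** 3                 # smooth cutoff kernel (Fourier side)
phi_W = np.exp(1j * np.outer(t, w)).mean(axis=1)  # empirical char. function of W
phi_U = np.exp(-0.5 * (s_u * t) ** 2)             # known error char. function

# Inverse Fourier transform of phi_K * phi_W / phi_U on a grid:
# f_X(u) ~ (1 / 2pi) * integral of exp(-i t u) phi_K(t) phi_W(t) / phi_U(t) dt.
grid = np.linspace(-1.0, 5.0, 121)
integrand = phi_K * phi_W / phi_U
f_hat = np.real(np.exp(-1j * np.outer(grid, t)) @ integrand) \
        * (t[1] - t[0]) / (2 * np.pi)

print(grid[np.argmax(f_hat)])   # mode of the estimate, near E[X] = 2
```

The band-limiting kernel phi_K is essential: without it, dividing the noisy empirical characteristic function by the rapidly decaying phi_U would amplify high-frequency noise without bound. Estimating phi_U from replicated measurements, as the paper does, replaces the known-phi_U line above.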