Full-text access (number of articles)
Subscription full text | 3,924 |
Free | 77 |
Free (domestic) | 14 |
Subject category (number of articles)
Management science | 179 |
Ethnology | 1 |
Demography | 37 |
Book series and collected works | 21 |
Theory and methodology | 17 |
General | 318 |
Sociology | 24 |
Statistics | 3,418 |
Publication year (number of articles)
2024 | 1 |
2023 | 21 |
2022 | 35 |
2021 | 23 |
2020 | 70 |
2019 | 146 |
2018 | 161 |
2017 | 269 |
2016 | 127 |
2015 | 79 |
2014 | 111 |
2013 | 1,177 |
2012 | 350 |
2011 | 96 |
2010 | 116 |
2009 | 133 |
2008 | 120 |
2007 | 89 |
2006 | 91 |
2005 | 87 |
2004 | 74 |
2003 | 59 |
2002 | 66 |
2001 | 61 |
2000 | 59 |
1999 | 60 |
1998 | 53 |
1997 | 42 |
1996 | 23 |
1995 | 20 |
1994 | 26 |
1993 | 19 |
1992 | 23 |
1991 | 8 |
1990 | 15 |
1989 | 9 |
1988 | 17 |
1987 | 8 |
1986 | 6 |
1985 | 5 |
1984 | 13 |
1983 | 13 |
1982 | 8 |
1981 | 7 |
1980 | 3 |
1979 | 6 |
1978 | 5 |
1977 | 2 |
1975 | 2 |
1973 | 1 |
4,015 results found (search time: 0 ms)
11.
In the development of many diseases there are often associated random variables which continuously reflect the progress of a subject towards the final expression of the disease (failure). At any given time these processes, which we call stochastic covariates, may provide information about the current hazard and the remaining time to failure. Likewise, in situations when the specific times of key prior events are not known, such as the time of onset of an occult tumour or the time of infection with HIV-1, it may be possible to identify a stochastic covariate which reveals, indirectly, when the event of interest occurred. The analysis of carcinogenicity trials which involve occult tumours is usually based on the time of death or sacrifice and an indicator of tumour presence for each animal in the experiment. However, the size of an occult tumour observed at the endpoint represents data concerning tumour development which may convey additional information concerning both the tumour incidence rate and the rate of death to which tumour-bearing animals are subject. We develop a stochastic model for tumour growth and suggest different ways in which the effect of this growth on the hazard of failure might be modelled. Using a combined model for tumour growth and additive competing risks of death, we show that if this tumour size information is used, assumptions concerning tumour lethality, the context of observation or multiple sacrifice times are no longer necessary in order to estimate the tumour incidence rate. Parametric estimation based on the method of maximum likelihood is outlined and is applied to simulated data from the combined model. The results of this limited study confirm that use of the stochastic covariate tumour size results in more precise estimation of the incidence rate for occult tumours.
12.
W. Stute, Journal of Statistical Planning and Inference, 1992, 30(3): 293-305
We propose a new modified (biased) cross-validation method for adaptively determining the bandwidth in a nonparametric density estimation setup. It is shown that the method provides consistent minimizers. Simulation results comparing the small-sample behavior of the new and the classical cross-validation selectors are also reported.
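For background, the classical least-squares cross-validation selector that such modified criteria build on can be sketched as follows. This is an illustrative implementation of ordinary cross-validation with a Gaussian kernel, not the modified selector proposed in the paper; the sample, evaluation grid, and bandwidth range are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=200)   # illustrative sample; the paper's setup is generic
n = len(x)

def gauss(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def lscv(h):
    """Least-squares CV score: integral of fhat^2 minus twice the mean leave-one-out fit."""
    grid = np.linspace(x.min() - 3 * h, x.max() + 3 * h, 400)
    fhat = gauss((grid[:, None] - x[None, :]) / h).sum(axis=1) / (n * h)
    int_f2 = np.sum(fhat**2) * (grid[1] - grid[0])   # Riemann approximation of the integral
    k = gauss((x[:, None] - x[None, :]) / h)
    np.fill_diagonal(k, 0.0)                          # leave observation i out
    loo = k.sum(axis=1) / ((n - 1) * h)
    return int_f2 - 2.0 * loo.mean()

bandwidths = np.linspace(0.05, 1.5, 60)
scores = np.array([lscv(h) for h in bandwidths])
h_star = bandwidths[np.argmin(scores)]               # data-driven bandwidth
```

Minimizing this score over a grid of bandwidths gives the data-driven choice `h_star`; modified (biased) variants alter the criterion to stabilize exactly this minimization.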
13.
14.
Peter J. Robinson, Risk Analysis, 1992, 12(1): 139-148
Because of the inherent complexity of biological systems, there is often a choice between a number of apparently equally applicable physiologically based models to describe uptake and metabolism processes in toxicology or risk assessment. These models may fit the particular data sets of interest equally well, but may give quite different parameter estimates or predictions under different (extrapolated) conditions. Such competing models can be discriminated by a number of methods, including potential refutation by means of strategic experiments, and their ability to suitably incorporate all relevant physiological processes. For illustration, three currently used models for steady-state hepatic elimination (the venous equilibration model, the parallel tube model, and the distributed sinusoidal perfusion model) are reviewed and compared with particular reference to their application in the area of risk assessment. The ability of each of the models to describe and incorporate such physiological processes as protein binding, precursor-metabolite relations and hepatic zones of elimination, capillary recruitment, capillary heterogeneity, and intrahepatic shunting is discussed. Differences between the models in hepatic parameter estimation, extrapolation to different conditions, and interspecies scaling are discussed, and criteria for choosing one model over the others are presented. In this case, the distributed model provides the most general framework for describing physiological processes taking place in the liver, and has so far not been experimentally refuted, unlike the other two models. These simpler models may, however, provide useful bounds on parameter estimates and on extrapolations and risk assessments.
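For intuition, the two simpler models admit well-known closed-form clearance expressions, and a short numeric sketch shows why they fit low-extraction data equally well yet extrapolate differently. The parameter values below (hepatic blood flow Q, unbound fraction fu, intrinsic clearance CLint) are purely illustrative assumptions, not taken from the article.

```python
import math

def cl_venous_equilibration(Q, fu, clint):
    """Venous equilibration ("well-stirred") model: CL = Q*fu*CLint / (Q + fu*CLint)."""
    return Q * fu * clint / (Q + fu * clint)

def cl_parallel_tube(Q, fu, clint):
    """Parallel tube (sinusoidal) model: CL = Q * (1 - exp(-fu*CLint/Q))."""
    return Q * (1.0 - math.exp(-fu * clint / Q))

Q, fu = 1.5, 0.1   # illustrative values: blood flow in L/min, unbound fraction
for clint in (1.0, 10.0, 100.0):   # low, moderate, and high intrinsic clearance
    ve = cl_venous_equilibration(Q, fu, clint)
    pt = cl_parallel_tube(Q, fu, clint)
    # both models reduce to fu*CLint when fu*CLint << Q (near-identical fits),
    # but approach the flow limit Q at different rates (divergent extrapolations)
```

At low intrinsic clearance the two predictions are nearly indistinguishable, which is precisely why fitting data alone may not discriminate the models; at high intrinsic clearance both are capped by blood flow Q but disagree on how quickly that limit is approached.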
15.
The small-sample performance of least median of squares (LMS), reweighted least squares (RLS), least squares (LS), least absolute deviations (LAD), and three partially adaptive estimators is compared using Monte Carlo simulations. Two data problems are addressed in the paper: (1) data generated from non-normal error distributions and (2) contaminated data. Breakdown plots are used to investigate the sensitivity of partially adaptive estimators to data contamination relative to RLS. One partially adaptive estimator performs especially well when the errors are skewed, while another partially adaptive estimator and RLS perform particularly well when the errors are extremely leptokurtic. In comparison with RLS, partially adaptive estimators are only moderately effective in resisting data contamination; however, they outperform the least squares and least absolute deviations estimators.
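The contamination issue can be made concrete with a toy Monte Carlo for the location case, where least squares corresponds to the sample mean and least absolute deviations to the sample median. This is far simpler than the regression setting studied in the article; the contamination rate, outlier distribution, and sample sizes are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_rep, eps = 100, 500, 0.10       # sample size, replications, contamination rate
err_mean, err_med = [], []
for _ in range(n_rep):
    x = rng.normal(0.0, 1.0, n)      # clean N(0,1) data; the true center is 0
    k = int(eps * n)
    idx = rng.choice(n, size=k, replace=False)
    x[idx] = rng.normal(20.0, 5.0, k)        # replace 10% with gross outliers
    err_mean.append(np.mean(x) ** 2)         # LS location estimate = sample mean
    err_med.append(np.median(x) ** 2)        # LAD location estimate = sample median
mse_mean = float(np.mean(err_mean))          # Monte Carlo MSE of each estimator
mse_med = float(np.mean(err_med))
```

Under this asymmetric contamination the mean is dragged toward the outliers while the median barely moves, which mirrors the breakdown behavior the abstract describes for the regression estimators.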
16.
The L1- and L2-errors of the histogram estimate of a density f from a sample X1, X2, …, Xn using a cubic partition are shown to be asymptotically normal without any unnecessary conditions imposed on the density f. The asymptotic variances are shown to depend on f only through the corresponding norm of f. From this follows the asymptotic null distribution of a goodness-of-fit test based on the total variation distance, introduced by Györfi and van der Meulen (1991). This note uses the idea of partial inversion for obtaining characteristic functions of conditional distributions, which goes back at least to Bartlett (1938).
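The quantity at the heart of this result, the L1 error of a histogram estimate, is easy to compute by simulation. The sketch below repeatedly draws standard normal samples, builds a fixed-bin-width histogram estimate, and records a Riemann approximation of its L1 distance to the true density; the sample size, bin width, and replication count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, h = 2000, 0.25                    # sample size and bin width of the partition
edges = np.arange(-5.0, 5.0 + h, h)  # fixed equal-width bins covering the bulk of N(0,1)

def l1_error(sample):
    counts, _ = np.histogram(sample, bins=edges)
    fhat = counts / (len(sample) * h)                 # histogram density estimate
    centers = 0.5 * (edges[:-1] + edges[1:])
    f = np.exp(-0.5 * centers**2) / np.sqrt(2.0 * np.pi)   # true N(0,1) density
    return np.sum(np.abs(fhat - f)) * h               # Riemann sum for the L1 distance

errs = np.array([l1_error(rng.normal(0.0, 1.0, n)) for _ in range(200)])
```

The replicated values of `errs` are what the theorem says are asymptotically normal after centering and scaling; empirically they concentrate tightly around their mean, with a spread much smaller than the mean itself.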
17.
To reduce nonresponse bias in sample surveys, a method of nonresponse weighting adjustment is often used which consists of multiplying the sampling weight of the respondent by the inverse of the estimated response probability. The authors examine the asymptotic properties of this estimator. They prove that it is generally more efficient than an estimator which uses the true response probability, provided that the parameters which govern this probability are estimated by maximum likelihood. The authors discuss variance estimation methods that account for the effect of using the estimated response probability; they compare their performances in a small simulation study. They also discuss extensions to the regression estimator.
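The adjustment itself is straightforward to sketch. The toy example below assumes a missing-at-random response mechanism driven by a single auxiliary covariate with made-up parameter values (none of this comes from the article); it estimates the response probability by maximum likelihood logistic regression, fitted with a few Newton-Raphson steps, and weights respondents by the inverse of the estimated probability.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 5000
x = rng.normal(0.0, 1.0, N)                  # auxiliary variable observed for all units
y = 2.0 + 1.5 * x + rng.normal(0.0, 1.0, N)  # study variable, seen only for respondents
true_mean = 2.0

# response probability depends on x (missing at random); units with large x respond more
p_resp = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x)))
r = rng.random(N) < p_resp                   # response indicator

# maximum likelihood logistic regression of r on x via Newton-Raphson
X = np.column_stack([np.ones(N), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (r - p))
p_hat = 1.0 / (1.0 + np.exp(-X @ beta))      # estimated response probabilities

naive = y[r].mean()                          # unadjusted respondent mean (biased upward here)
adjusted = np.sum(y[r] / p_hat[r]) / np.sum(1.0 / p_hat[r])   # inverse-probability weighted
```

Because respondents over-represent large x, the naive mean overshoots the target, while the inverse-probability-weighted estimate recovers it; the article's efficiency result concerns exactly this use of the estimated, rather than true, response probability.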
18.
James P. McDermott, G. Jogesh Babu, John C. Liechty, Dennis K. J. Lin, Statistics and Computing, 2007, 17(4): 311-321
We consider the problem of density estimation when the data are in the form of a continuous stream with no fixed length. In this setting, implementations of the usual methods of density estimation, such as kernel density estimation, are problematic. We propose a method of density estimation for massive datasets that is based upon taking the derivative of a smooth curve that has been fit through a set of quantile estimates. To achieve this, a low-storage, single-pass, sequential method is proposed for simultaneous estimation of multiple quantiles for massive datasets; these estimates form the basis of this method of density estimation. For comparison, we also consider a sequential kernel density estimator. The proposed methods are shown through a simulation study to perform well and to have several distinct advantages over existing methods.
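The core idea (track many quantiles in one pass, fit a smooth curve Q(p) through them, then differentiate) can be sketched as follows. The single-pass stochastic-approximation quantile update and the polynomial smoother below are generic stand-ins for, not reproductions of, the authors' algorithm; the stream length, gain sequence, and grid of quantile levels are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
probs = np.linspace(0.05, 0.95, 19)   # quantile levels tracked simultaneously
q = np.zeros_like(probs)              # running quantile estimates, O(1) storage per level

stream = rng.normal(0.0, 1.0, 200_000)   # stand-in for a data stream of unknown length
for t, xt in enumerate(stream, start=1):
    gain = 10.0 / (100.0 + t) ** 0.75    # slowly decaying step size
    # push each estimate up when the observation exceeds it, down otherwise,
    # so that q[i] converges to the probs[i]-quantile
    q += gain * ((xt > q).astype(float) - (1.0 - probs))

# density from the derivative of a smooth curve fit through the quantile estimates:
# since Q(p) is the inverse CDF, f(Q(p)) = 1 / Q'(p)
coef = np.polyfit(probs, q, deg=5)
f_hat = 1.0 / np.polyval(np.polyder(coef), probs)
```

Each observation is touched once and then discarded, so storage stays fixed regardless of stream length; for this N(0,1) stream the recovered `f_hat` at p = 0.5 should land near the standard normal density at zero.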
19.
Jason P. Fine, David V. Glidden, Kristine E. Lee, Journal of the Royal Statistical Society, Series B (Statistical Methodology), 2003, 65(1): 317-329
Summary. We propose a simple estimation procedure for a proportional hazards frailty regression model for clustered survival data in which the dependence is generated by a positive stable distribution. Inferences for the frailty parameter can be obtained by using output from Cox regression analyses. The computational burden is substantially less than that of the other approaches to estimation. The large sample behaviour of the estimator is studied and simulations show that the approximations are appropriate for use with realistic sample sizes. The methods are motivated by studies of familial associations in the natural history of diseases. Their practical utility is illustrated with sib pair data from Beaver Dam, Wisconsin.
20.
Catia Scricciolo, Scandinavian Journal of Statistics, 2007, 34(3): 626-642
Abstract. We consider the problem of estimating a compactly supported density taking a Bayesian nonparametric approach. We define a Dirichlet mixture prior that, while selecting piecewise constant densities, has full support on the Hellinger metric space of all commonly dominated probability measures on a known bounded interval. We derive pointwise rates of convergence for the posterior expected density by studying the speed at which the posterior mass accumulates on shrinking Hellinger neighbourhoods of the sampling density. If the data are sampled from a strictly positive, α-Hölderian density, with α ∈ (0, 1], then the optimal convergence rate n^(−α/(2α+1)) is obtained up to a logarithmic factor. Smoothing histograms by polygons, a continuous piecewise linear estimator is obtained that, for twice continuously differentiable, strictly positive densities satisfying boundary conditions, attains a rate comparable, up to a logarithmic factor, to the convergence rate n^(−4/5) for the integrated mean squared error of kernel-type density estimators.