Full-text access type
Paid full text | 4198 articles |
Free | 106 articles |
Free (domestic) | 16 articles |
Subject classification
Management | 219 articles |
Ethnology | 1 article |
Demography | 37 articles |
Collected works and series | 22 articles |
Theory and methodology | 20 articles |
General | 355 articles |
Sociology | 29 articles |
Statistics | 3637 articles |
Publication year
2024 | 2 articles |
2023 | 24 articles |
2022 | 40 articles |
2021 | 25 articles |
2020 | 74 articles |
2019 | 152 articles |
2018 | 169 articles |
2017 | 274 articles |
2016 | 140 articles |
2015 | 86 articles |
2014 | 118 articles |
2013 | 1258 articles |
2012 | 374 articles |
2011 | 109 articles |
2010 | 123 articles |
2009 | 141 articles |
2008 | 127 articles |
2007 | 96 articles |
2006 | 96 articles |
2005 | 98 articles |
2004 | 82 articles |
2003 | 63 articles |
2002 | 70 articles |
2001 | 67 articles |
2000 | 58 articles |
1999 | 64 articles |
1998 | 59 articles |
1997 | 45 articles |
1996 | 30 articles |
1995 | 26 articles |
1994 | 39 articles |
1993 | 27 articles |
1992 | 26 articles |
1991 | 13 articles |
1990 | 18 articles |
1989 | 9 articles |
1988 | 19 articles |
1987 | 10 articles |
1986 | 6 articles |
1985 | 4 articles |
1984 | 14 articles |
1983 | 13 articles |
1982 | 8 articles |
1981 | 5 articles |
1980 | 2 articles |
1979 | 6 articles |
1978 | 6 articles |
1977 | 2 articles |
1975 | 2 articles |
1973 | 1 article |
Sort order: 4,320 results found (search time: 2 ms)
31.
To reduce nonresponse bias in sample surveys, a method of nonresponse weighting adjustment is often used which consists of multiplying the sampling weight of the respondent by the inverse of the estimated response probability. The authors examine the asymptotic properties of this estimator. They prove that it is generally more efficient than an estimator which uses the true response probability, provided that the parameters which govern this probability are estimated by maximum likelihood. The authors discuss variance estimation methods that account for the effect of using the estimated response probability; they compare their performances in a small simulation study. They also discuss extensions to the regression estimator.
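As an illustration of the adjustment described in this abstract, the sketch below estimates a population mean by dividing each respondent's sampling weight by a response probability fitted via maximum-likelihood logistic regression. The logistic response model, the Newton-Raphson fit, and all variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ipw_adjusted_mean(y, x, respond, base_weight):
    """Nonresponse-adjusted weighted mean: each respondent's sampling
    weight is multiplied by the inverse of a response probability
    estimated by ML logistic regression on the covariate x."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(X.shape[1])
    for _ in range(50):                        # Newton-Raphson ML fit of P(respond | x)
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (respond - p)
        hess = X.T @ (X * (p * (1 - p))[:, None])
        beta += np.linalg.solve(hess, grad)
    p_hat = 1.0 / (1.0 + np.exp(-X @ beta))
    r = np.asarray(respond, dtype=bool)
    w = base_weight[r] / p_hat[r]              # adjusted weights for respondents
    return np.sum(w * y[r]) / np.sum(w)
```

When response depends on a covariate correlated with the outcome, the unadjusted respondent mean is biased while the adjusted estimator recovers the population mean.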
32.
A novel framework is proposed for the estimation of multiple sinusoids from irregularly sampled time series. This spectral analysis problem is addressed as an under-determined inverse problem, in which the spectrum is discretized on an arbitrarily thin frequency grid. As we focus on line spectra estimation, the solution must be sparse, i.e. the amplitude of the spectrum must be zero almost everywhere. Such prior information is taken into account within the Bayesian framework. Two models are used to account for the prior sparseness of the solution, namely a Laplace prior and a Bernoulli–Gaussian prior, associated with optimization and stochastic sampling algorithms, respectively. Such approaches are efficient alternatives to the usual sequential prewhitening methods, especially in the case of strong sampling aliases perturbing the Fourier spectrum. Both methods are intended to be tested intensively on real data sets by physicists.
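Under a Laplace prior, the MAP estimate reduces to an l1-penalized least-squares problem over the amplitudes on the frequency grid. A minimal sketch of that route, assuming a cosine/sine dictionary evaluated at the irregular sample times and plain ISTA (soft-thresholding) iterations; this is a generic stand-in, not the authors' algorithm:

```python
import numpy as np

def sparse_spectrum(t, y, freqs, lam=0.5, n_iter=500):
    """MAP line-spectrum estimate under a Laplace (l1) prior via ISTA:
    gradient steps on the least-squares fit followed by soft thresholding.
    Returns one amplitude per candidate frequency on the grid."""
    A = np.hstack([np.cos(2 * np.pi * np.outer(t, freqs)),
                   np.sin(2 * np.pi * np.outer(t, freqs))])
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    coef = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ coef - y)
        z = coef - grad / L
        coef = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return np.hypot(coef[:len(freqs)], coef[len(freqs):])
```

With a single noise-free sinusoid in the data, the recovered amplitude spectrum peaks at the true frequency even though the sampling is irregular.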
33.
James P. McDermott G. Jogesh Babu John C. Liechty Dennis K. J. Lin 《Statistics and Computing》2007,17(4):311-321
We consider the problem of density estimation when the data arrive as a continuous stream with no fixed length. In this setting, implementations of the usual methods of density estimation, such as kernel density estimation, are problematic. We propose a method of density estimation for massive datasets based upon taking the derivative of a smooth curve fit through a set of quantile estimates. To achieve this, a low-storage, single-pass, sequential method is proposed for the simultaneous estimation of multiple quantiles for massive datasets; these quantile estimates form the basis of the density estimation method. For comparison, we also consider a sequential kernel density estimator. The proposed methods are shown through a simulation study to perform well and to have several distinct advantages over existing methods.
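A low-storage, single-pass estimate of several quantiles at once can be sketched with stochastic-approximation (Robbins-Monro) updates, each quantile kept as a single running value; a smooth curve through the resulting quantile estimates could then be differentiated to give the density, as the abstract outlines. This toy sketch is illustrative only and is not the paper's algorithm:

```python
import numpy as np

def streaming_quantiles(stream, probs, c=1.0):
    """One running estimate per requested quantile, updated on each new
    observation with a decaying Robbins-Monro step; storage is O(len(probs))
    regardless of stream length."""
    probs = np.asarray(probs, dtype=float)
    q = None
    for n, x in enumerate(stream, start=1):
        if q is None:
            q = np.full(len(probs), x, dtype=float)  # initialize at first value
        step = c / np.sqrt(n)                        # decaying step size
        q += step * (probs - (x <= q))               # push q toward each target quantile
    return q
```

On a long stream of standard-normal draws the estimates settle near the true quartiles (-0.674, 0, 0.674).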
34.
Jason P. Fine David V. Glidden Kristine E. Lee 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2003,65(1):317-329
Summary. We propose a simple estimation procedure for a proportional hazards frailty regression model for clustered survival data in which the dependence is generated by a positive stable distribution. Inferences for the frailty parameter can be obtained by using output from Cox regression analyses. The computational burden is substantially less than that of the other approaches to estimation. The large sample behaviour of the estimator is studied and simulations show that the approximations are appropriate for use with realistic sample sizes. The methods are motivated by studies of familial associations in the natural history of diseases. Their practical utility is illustrated with sib pair data from Beaver Dam, Wisconsin.
35.
CATIA SCRICCIOLO 《Scandinavian Journal of Statistics》2007,34(3):626-642
We consider the problem of estimating a compactly supported density taking a Bayesian nonparametric approach. We define a Dirichlet mixture prior that, while selecting piecewise constant densities, has full support on the Hellinger metric space of all commonly dominated probability measures on a known bounded interval. We derive pointwise rates of convergence for the posterior expected density by studying the speed at which the posterior mass accumulates on shrinking Hellinger neighbourhoods of the sampling density. If the data are sampled from a strictly positive, α-Hölderian density, with α ∈ (0, 1], then the optimal convergence rate n^(−α/(2α+1)) is obtained up to a logarithmic factor. Smoothing histograms by polygons, a continuous piecewise linear estimator is obtained that, for twice continuously differentiable, strictly positive densities satisfying boundary conditions, attains a rate comparable up to a logarithmic factor to the convergence rate n^(−4/5) for the integrated mean squared error of kernel-type density estimators.
36.
The ability to infer parameters of gene regulatory networks is emerging as a key problem in systems biology. The biochemical data are intrinsically stochastic and tend to be observed by means of discrete-time sampling systems, which are often limited in their completeness. In this paper we explore how to make Bayesian inference for the kinetic rate constants of regulatory networks, using the stochastic kinetic Lotka-Volterra system as a model. This simple model describes behaviour typical of many biochemical networks which exhibit auto-regulatory behaviour. Various MCMC algorithms are described and their performance evaluated in several data-poor scenarios. An algorithm based on an approximating process is shown to be particularly efficient.
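For concreteness, the stochastic kinetic Lotka-Volterra model referred to here can be simulated exactly with Gillespie's algorithm; discrete-time observations of such paths are the data over which the paper's MCMC samplers explore the rate constants. The reaction set below (prey birth, predation, predator death) is the standard parameterization, assumed rather than taken from the paper:

```python
import numpy as np

def lotka_volterra_ssa(x0, y0, c, t_max, rng):
    """Exact (Gillespie) simulation of the stochastic Lotka-Volterra model.
    Reactions and hazards: prey birth c[0]*x, predation c[1]*x*y
    (prey -1, predator +1), predator death c[2]*y."""
    t, x, y = 0.0, x0, y0
    path = [(t, x, y)]
    while t < t_max:
        h = np.array([c[0] * x, c[1] * x * y, c[2] * y])  # reaction hazards
        h0 = h.sum()
        if h0 == 0.0:                      # both species extinct: nothing can fire
            break
        t += rng.exponential(1.0 / h0)     # time to next reaction
        j = rng.choice(3, p=h / h0)        # which reaction fires
        if j == 0:
            x += 1
        elif j == 1:
            x -= 1
            y += 1
        else:
            y -= 1
        path.append((t, x, y))
    return path
```

Because each hazard vanishes when its reactant count hits zero, the simulated populations can never go negative.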
37.
Robin Willink 《Revue canadienne de statistique》2008,36(4):623-637
If the unknown mean of a univariate population is sufficiently close to the value of an initial guess then an appropriate shrinkage estimator has smaller average squared error than the sample mean. This principle has been known for some time, but it does not appear to have found extension to problems of interval estimation. The author presents valid two‐sided 95% and 99% “shrinkage” confidence intervals for the mean of a normal distribution. These intervals are narrower than the usual interval based on the Student distribution when the population mean lies in such an “effective interval.” A reduction of 20% in the mean width of the interval is possible when the population mean is sufficiently close to the value of the guess. The author also describes a modification to existing shrinkage point estimators of the general univariate mean that enables the effective interval to be enlarged. 
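The underlying point-estimation idea can be sketched with a toy shrinkage rule: pull the sample mean toward the guess, with a pull that decays as the guess moves away from the data in standard-error units. The weight function below is an illustrative assumption and is not the author's interval construction:

```python
import math

def shrunken_mean(xbar, guess, s, n):
    """Shrink the sample mean toward an initial guess. The shrinkage
    weight w is near 1 when the guess is within a standard error of the
    data and decays quadratically as the guess becomes implausible."""
    se = s / math.sqrt(n)
    t = (xbar - guess) / se            # distance of guess from data, in SEs
    w = 1.0 / (1.0 + t * t)            # shrinkage weight in (0, 1]
    return w * guess + (1.0 - w) * xbar
```

A good guess pulls the estimate part-way toward itself; a wildly wrong guess is effectively ignored, so the estimator falls back to the sample mean.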
38.
JØRUND GÅSEMYR 《Scandinavian Journal of Statistics》2003,30(1):159-173
In this paper, we present a general formulation of an algorithm, the adaptive independent chain (AIC), that was introduced in a special context in Gåsemyr et al. [Methodol. Comput. Appl. Probab. 3 (2001)]. The algorithm aims at producing samples from a specific target distribution Π, and is an adaptive, non-Markovian version of the Metropolis–Hastings independent chain. A certain parametric class of possible proposal distributions is fixed, and the parameters of the proposal distribution are updated periodically on the basis of the recent history of the chain, thereby obtaining proposals that get ever closer to Π. We show that under certain conditions, the algorithm produces an exact sample from Π in a finite number of iterations, and hence that it converges to Π. We also present another adaptive algorithm, the componentwise adaptive independent chain (CAIC), which may be an alternative in particular in high dimensions. The CAIC may be regarded as an adaptive approximation to the Gibbs sampler, updating parametric approximations to the conditionals of Π.
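A toy one-dimensional version of the adaptive independence idea: an independence Metropolis-Hastings sampler whose Gaussian proposal mean and standard deviation are refit periodically from the chain's recent history. The convergence safeguards the paper establishes are omitted here, so this sketch illustrates the mechanics only:

```python
import numpy as np

def adaptive_independence_mh(logpi, n_iter, rng, adapt_every=500):
    """Independence MH with a Gaussian proposal N(mu, sd^2) whose
    parameters are periodically refit to the chain's recent samples,
    so proposals drift toward the target (toy 1-D sketch)."""
    mu, sd = 0.0, 5.0                              # broad initial proposal
    x = 0.0
    chain = np.empty(n_iter)
    for i in range(n_iter):
        y = rng.normal(mu, sd)
        logq = lambda z: -0.5 * ((z - mu) / sd) ** 2
        # independence-sampler acceptance: pi(y) q(x) / (pi(x) q(y))
        if np.log(rng.uniform()) < (logpi(y) - logpi(x)) + (logq(x) - logq(y)):
            x = y
        chain[i] = x
        if (i + 1) % adapt_every == 0:             # refit proposal parameters
            recent = chain[max(0, i - adapt_every):i + 1]
            mu, sd = recent.mean(), recent.std() + 1e-3
    return chain
```

Once the proposal has adapted toward the target, acceptance rates rise and the chain mixes nearly as well as independent sampling from the target.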
39.
Hee-Seok Oh Ta-Hsin Li 《Journal of the Royal Statistical Society. Series B, Statistical methodology》2004,66(1):221-238
Summary. The paper considers the problem of estimating the entire temperature field for every location on the globe from scattered surface air temperatures observed by a network of weather-stations. Classical methods such as spherical harmonics and spherical smoothing splines are not efficient in representing data that have inherent multiscale structures. The paper presents an estimation method that can adapt to the multiscale characteristics of the data. The method is based on a spherical wavelet approach that has recently been developed for a multiscale representation and analysis of scattered data. Spatially adaptive estimators are obtained by coupling the spherical wavelets with different thresholding (selective reconstruction) techniques. These estimators are compared for their spatial adaptability and extrapolation performance by using the surface air temperature data.
40.
Modeling for Risk Assessment of Neurotoxic Effects (Cited by: 2; self-citations: 0; citations by others: 2)
The regulation of noncancer toxicants, including neurotoxicants, has usually been based upon a reference dose (allowable daily intake). A reference dose is obtained by dividing a no-observed-effect level by uncertainty (safety) factors to account for intraspecies and interspecies sensitivities to a chemical. It is assumed that the risk at the reference dose is negligible, but generally no attempt is made to estimate the risk at the reference dose. A procedure is outlined that provides estimates of risk as a function of dose. The first step is to establish a mathematical relationship between a biological effect and the dose of a chemical. Knowledge of biological mechanisms and/or pharmacokinetics can assist in the choice of plausible mathematical models. The mathematical model provides estimates of average responses as a function of dose. Secondly, estimates of risk require selection of a distribution of individual responses about the average response given by the mathematical model. In the case of a normal or lognormal distribution, only an estimate of the standard deviation is needed. The third step is to define an adverse level for a response so that the probability (risk) of exceeding that level can be estimated as a function of dose. Because a firm response level often cannot be established at which adverse biological effects occur, it may be necessary to at least establish an abnormal response level that only a small proportion of individuals would exceed in an unexposed group. That is, if a normal range of responses can be established, then the probability (risk) of abnormal responses can be estimated. In order to illustrate this process, measures of the neurotransmitter serotonin and its metabolite 5-hydroxyindoleacetic acid in specific areas of the brain of rats and monkeys are analyzed after exposure to the neurotoxicant methylenedioxymethamphetamine. These risk estimates are compared with risk estimates from the quantal approach in which animals are classified as either abnormal or not depending upon abnormal serotonin levels.
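The three steps in this abstract can be sketched numerically: (1) a mean-response model over dose (here an assumed linear decline in a response such as serotonin level), (2) normal individual variation about that mean with standard deviation sigma, and (3) an abnormal cutoff set at the 1st percentile of the unexposed distribution, so that risk at dose d is the probability of falling below the cutoff. All parameter values are illustrative:

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def excess_risk(dose, baseline=100.0, slope=-2.0, sigma=10.0):
    """Risk of an 'abnormal' (below-cutoff) response as a function of dose:
    mean response declines linearly with dose, individual responses are
    normal about the mean, and the cutoff is the 1st percentile of the
    unexposed (dose 0) distribution."""
    cutoff = baseline - 2.3263478740408408 * sigma  # 1% quantile of N(baseline, sigma^2)
    mean_at_dose = baseline + slope * dose
    return norm_cdf((cutoff - mean_at_dose) / sigma)
```

By construction the risk equals 1% at dose zero and rises monotonically with dose, which is exactly the dose-risk curve the reference-dose approach leaves unquantified.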