Search results: 5,472 articles matched the query; the first ten are listed below.
1.
Olivier Cappé, Randal Douc, Eric Moulines & Christian Robert. Scandinavian Journal of Statistics 2002, 29(4): 615-635
While much used in practice, latent variable models raise challenging estimation problems due to the intractability of their likelihood. Monte Carlo maximum likelihood (MCML), as proposed by Geyer & Thompson (1992), is a simulation-based approach to maximum likelihood approximation applicable to general latent variable models. MCML can be described as an importance sampling method in which the likelihood ratio is approximated by Monte Carlo averages of importance ratios simulated from the complete data model corresponding to an arbitrary value of the unknown parameter. This paper studies the asymptotic (in the number of observations) performance of the MCML method in the case of latent variable models with independent observations. This is in contrast with previous work on the same topic, which considered only conditional convergence to the maximum likelihood estimator for a fixed set of observations. A first important result is that when this reference parameter value is fixed, the MCML method can only be consistent if the number of simulations grows exponentially fast with the number of observations. If, on the other hand, the reference value is obtained from a consistent sequence of estimates of the unknown parameter, then the requirements on the number of simulations are shown to be much weaker.
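To make the importance-sampling construction concrete, here is a minimal sketch for a hypothetical Gaussian latent variable model; the model, the reference value `theta0`, the simulation size `m`, and the grid search are all illustrative choices, not the paper's setup.

```python
import numpy as np

# Toy latent variable model (illustrative, not the paper's):
#   z_i ~ N(theta, 1) latent,  y_i | z_i ~ N(z_i, 1) observed.
def complete_loglik(y, z, theta):
    # complete-data log density, up to additive constants
    return -0.5 * (y - z) ** 2 - 0.5 * (z - theta) ** 2

rng = np.random.default_rng(0)
theta_true, n = 1.5, 200
z = rng.normal(theta_true, 1.0, n)
y = rng.normal(z, 1.0)

# MCML: fix a reference value theta0 and draw m latent samples per
# observation from p(z | y; theta0), which here is N((y + theta0)/2, 1/2).
theta0, m = 0.0, 2000
z_sim = rng.normal((y + theta0) / 2.0, np.sqrt(0.5), size=(m, n))

def mcml_objective(theta):
    # log of the Monte Carlo estimate of L(theta)/L(theta0), summed over i
    log_ratio = (complete_loglik(y, z_sim, theta)
                 - complete_loglik(y, z_sim, theta0))
    return np.sum(np.logaddexp.reduce(log_ratio, axis=0) - np.log(m))

grid = np.linspace(-1.0, 3.0, 201)
theta_mcml = grid[np.argmax([mcml_objective(t) for t in grid])]
print(f"MCML estimate: {theta_mcml:.3f}  (marginal MLE: {y.mean():.3f})")
```

The paper's point shows up directly in this sketch: with `theta0` held fixed far from the truth, the importance weights degenerate as the number of observations grows unless `m` grows exponentially, whereas seeding `theta0` with a consistent preliminary estimate keeps the Monte Carlo requirements modest.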
2.
If a population contains many zero values and the sample size is not very large, the traditional normal approximation-based confidence intervals for the population mean may have poor coverage probabilities. This problem is substantially reduced by constructing parametric likelihood ratio intervals when an appropriate mixture model can be found. In the context of survey sampling, however, there is a general preference for making minimal assumptions about the population under study. The authors have therefore investigated the coverage properties of nonparametric empirical likelihood confidence intervals for the population mean. They show that under a variety of hypothetical populations, these intervals often outperform parametric likelihood intervals by having more balanced coverage rates and larger lower bounds. The authors illustrate their methodology using data from the Canadian Labour Force Survey for the year 2000.
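As background, a minimal Owen-style empirical likelihood interval for an i.i.d. mean can be computed as follows; the zero-inflated population and all constants are hypothetical, and this sketch ignores the survey-design aspects the authors address.

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for the mean mu."""
    d = x - mu
    if d.min() >= 0 or d.max() <= 0:          # mu outside the convex hull
        return np.inf
    lo = -1.0 / d.max() + 1e-10               # keep 1 + lam*d > 0
    hi = -1.0 / d.min() - 1e-10
    g = lambda lam: np.mean(d / (1.0 + lam * d))   # monotone decreasing
    lam = brentq(g, lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))

# Hypothetical zero-inflated sample, mimicking the many-zeros setting:
rng = np.random.default_rng(1)
x = np.where(rng.random(80) < 0.6, 0.0, rng.lognormal(0.0, 1.0, 80))

# 95% EL interval: all mu with -2 log R(mu) <= chi2_{1, 0.95} = 3.841
grid = np.linspace(x.mean() - 1.0, x.mean() + 1.5, 2000)
inside = [mu for mu in grid if el_log_ratio(x, mu) <= 3.841]
print(f"95% EL interval for the mean: ({min(inside):.3f}, {max(inside):.3f})")
```

Because the interval is determined by the likelihood-ratio contour rather than a symmetric normal approximation, its lower bound cannot fall below the smallest observation, which is the mechanism behind the more balanced coverage reported above.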
3.
Maximum likelihood estimation and goodness-of-fit techniques are used within a competing risks framework to obtain maximum likelihood estimates of hazard, density, and survivor functions for randomly right-censored variables. Goodness-of-fit techniques are used to fit distributions to the crude lifetimes, which are used to obtain an estimate of the hazard function, which, in turn, is used to construct the survivor and density functions of the net lifetime of the variable of interest. If only one of the crude lifetimes can be adequately characterized by a parametric model, then semi-parametric estimates may be obtained using a maximum likelihood estimate of one crude lifetime and the empirical distribution function of the other. Simulation studies show that the survivor function estimates from crude lifetimes compare favourably with those given by the product-limit estimator when the crude lifetime distributions are chosen correctly. Other advantages are discussed.
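A much-reduced sketch of the idea, assuming exponential crude lifetimes and independent risks; the paper's goodness-of-fit step and its semi-parametric variant are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.exponential(1.0 / 0.5, n)      # lifetime of interest, hazard 0.5
c = rng.exponential(1.0 / 0.3, n)      # competing lifetime, hazard 0.3
t = np.minimum(x, c)                   # observed time
delta = (x <= c).astype(int)           # 1 if the event of interest occurred

# Parametric (exponential) estimate of the net hazard of X: under
# independent risks, the MLE is events / total time at risk.
lam_hat = delta.sum() / t.sum()

# Product-limit (Kaplan-Meier) estimate of S_X at the observed times.
order = np.argsort(t)
ts, ds = t[order], delta[order]
km = np.cumprod(1.0 - ds / np.arange(n, 0, -1))

s1_param = np.exp(-lam_hat * 1.0)
s1_km = km[np.searchsorted(ts, 1.0) - 1]
print(f"hazard MLE {lam_hat:.3f} (truth 0.5); "
      f"S(1): parametric {s1_param:.3f}, Kaplan-Meier {s1_km:.3f}")
```

When the exponential form is correct, the parametric survivor curve is smoother and less variable than the product-limit step function, which is the favourable comparison the simulation studies report.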
4.
We describe an image reconstruction problem and the computational difficulties arising in determining the maximum a posteriori (MAP) estimate. Two algorithms for tackling the problem, iterated conditional modes (ICM) and simulated annealing, are usually applied pixel by pixel. The performance of this strategy can be poor, particularly for heavily degraded images, and as a potential improvement Jubb and Jennison (1991) suggest the cascade algorithm in which ICM is initially applied to coarser images formed by blocking squares of pixels. In this paper we attempt to resolve certain criticisms of cascade and present a version of the algorithm extended in definition and implementation. As an illustration we apply our new method to a synthetic aperture radar (SAR) image. We also carry out a study of simulated annealing, with and without cascade, applied to a more tractable minimization problem from which we gain insight into the properties of cascade algorithms.
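For orientation, a pixel-by-pixel ICM sweep for a binary image under an Ising-type prior looks roughly like the following; the noise level, smoothing parameter, and the crude clamped boundaries are illustrative, and the cascade extension itself is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical truth: a square on a background, observed in Gaussian noise.
truth = np.zeros((64, 64), dtype=int)
truth[16:48, 16:48] = 1
y = truth + rng.normal(0.0, 0.8, truth.shape)

beta, sigma = 1.5, 0.8              # smoothing weight and noise sd (assumed)
x = (y > 0.5).astype(int)           # initialise at the pixelwise ML estimate

for sweep in range(10):             # pixel-by-pixel ICM sweeps
    for i in range(64):
        for j in range(64):
            # 4-neighbours, crudely clamped at the image border
            nb = [x[max(i - 1, 0), j], x[min(i + 1, 63), j],
                  x[i, max(j - 1, 0)], x[i, min(j + 1, 63)]]
            score = lambda k: (-(y[i, j] - k) ** 2 / (2 * sigma ** 2)
                               + beta * sum(v == k for v in nb))
            x[i, j] = max((0, 1), key=score)

print(f"misclassified pixels: {(x != truth).sum()} of {truth.size}")
```

Each update is a greedy local maximisation of the posterior, so ICM converges quickly but only to a local mode; the cascade idea starts the same updates on block-coarsened images so that large structures are recovered before fine detail.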
5.
We propose some estimators of noncentrality parameters which improve upon the usual unbiased estimators under quadratic loss. The distributions we consider are the noncentral chi-square and the noncentral F. However, we give more general results for the family of elliptically contoured distributions and propose a robust dominating estimator.
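The phenomenon is easy to see in the simplest case: the unbiased estimator X - k of the noncentrality of a chi-square variable with k degrees of freedom can go negative, and truncating it at zero already reduces quadratic risk. The Monte Carlo check below illustrates that baseline improvement; the paper's dominating estimators are more refined than plain truncation.

```python
import numpy as np

rng = np.random.default_rng(4)
k, lam, reps = 5, 2.0, 200_000

x = rng.noncentral_chisquare(k, lam, reps)
unbiased = x - k                        # E[X] = k + lambda, so unbiased
truncated = np.maximum(unbiased, 0.0)   # never worse when lambda >= 0

print(f"risk of unbiased estimator:  {np.mean((unbiased - lam) ** 2):.3f}")
print(f"risk of truncated estimator: {np.mean((truncated - lam) ** 2):.3f}")
```

Truncation is pointwise at least as close to any nonnegative target whenever the unbiased value is negative, so domination here needs no distributional assumptions at all.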
6.
In the development of many diseases there are often associated random variables which continuously reflect the progress of a subject towards the final expression of the disease (failure). At any given time these processes, which we call stochastic covariates, may provide information about the current hazard and the remaining time to failure. Likewise, in situations when the specific times of key prior events are not known, such as the time of onset of an occult tumour or the time of infection with HIV-1, it may be possible to identify a stochastic covariate which reveals, indirectly, when the event of interest occurred. The analysis of carcinogenicity trials which involve occult tumours is usually based on the time of death or sacrifice and an indicator of tumour presence for each animal in the experiment. However, the size of an occult tumour observed at the endpoint represents data concerning tumour development which may convey additional information concerning both the tumour incidence rate and the rate of death to which tumour-bearing animals are subject. We develop a stochastic model for tumour growth and suggest different ways in which the effect of this growth on the hazard of failure might be modelled. Using a combined model for tumour growth and additive competing risks of death, we show that if this tumour size information is used, assumptions concerning tumour lethality, the context of observation or multiple sacrifice times are no longer necessary in order to estimate the tumour incidence rate. Parametric estimation based on the method of maximum likelihood is outlined and is applied to simulated data from the combined model. The results of this limited study confirm that use of the stochastic covariate tumour size results in more precise estimation of the incidence rate for occult tumours.
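To fix ideas, here is a toy simulation of the kind of combined model described, with a latent onset time, a stochastic tumour size process, and a size-dependent death hazard; every rate and functional form is invented for illustration and none is the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(8)

def simulate_animal(dt=0.01, t_max=3.0):
    """One animal: latent tumour onset, stochastic growth, and death.
    All rates below are illustrative placeholders."""
    t, size, onset = 0.0, 0.0, None
    while t < t_max:
        if onset is None and rng.random() < 0.3 * dt:   # tumour incidence
            onset, size = t, 0.01
        if onset is not None:                           # geometric growth step
            size *= np.exp(rng.normal(0.5 * dt, 0.3 * np.sqrt(dt)))
        if rng.random() < (0.1 + 2.0 * size) * dt:      # size raises the hazard
            return t, onset, size                       # death before sacrifice
        t += dt
    return t_max, onset, size                           # sacrificed at t_max

for _ in range(5):
    t, onset, size = simulate_animal()
    label = "none" if onset is None else f"{onset:.2f}"
    print(f"endpoint t={t:.2f}, onset={label}, size={size:.3f}")
```

Because the final size carries information about how long the tumour has been growing, likelihoods built on (death time, tumour indicator, size) can identify the incidence rate without the lethality or multiple-sacrifice assumptions mentioned above.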
7.
W. Stute. Journal of Statistical Planning and Inference 1992, 30(3): 293-305
We propose a new modified (biased) cross-validation method for adaptively determining the bandwidth in a nonparametric density estimation setup. It is shown that the method provides consistent minimizers. Simulation results comparing the small-sample behavior of the new and the classical cross-validation selectors are also reported.
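For context, the classical least-squares cross-validation criterion against which the modified selector is compared can be written in a few lines for a Gaussian kernel; this sketch implements only that classical baseline on synthetic data, not the proposed modified criterion.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, 200)
n = len(x)
diff = x[:, None] - x[None, :]

def phi(u, s):
    # Gaussian density with sd s
    return np.exp(-0.5 * (u / s) ** 2) / (s * np.sqrt(2 * np.pi))

def ls_cv(h):
    # integral of fhat^2 has a closed form for Gaussian kernels
    int_f2 = phi(diff, h * np.sqrt(2)).sum() / n**2
    # leave-one-out term: drop the diagonal contributions
    loo = (phi(diff, h).sum() - n * phi(0.0, h)) / (n * (n - 1))
    return int_f2 - 2.0 * loo

grid = np.linspace(0.05, 1.5, 200)
h_cv = grid[np.argmin([ls_cv(h) for h in grid])]
print(f"cross-validated bandwidth: {h_cv:.3f}")
```

The classical criterion is an unbiased estimate of the integrated squared error up to a constant; biased modifications of this objective are what the paper studies for more stable small-sample behavior.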
8.
Peter J. Robinson. Risk Analysis 1992, 12(1): 139-148
Because of the inherent complexity of biological systems, there is often a choice between a number of apparently equally applicable physiologically based models to describe uptake and metabolism processes in toxicology or risk assessment. These models may fit the particular data sets of interest equally well, but may give quite different parameter estimates or predictions under different (extrapolated) conditions. Such competing models can be discriminated by a number of methods, including potential refutation by means of strategic experiments, and by their ability to suitably incorporate all relevant physiological processes. For illustration, three currently used models for steady-state hepatic elimination (the venous equilibration model, the parallel tube model, and the distributed sinusoidal perfusion model) are reviewed and compared with particular reference to their application in the area of risk assessment. The ability of each of the models to describe and incorporate such physiological processes as protein binding, precursor-metabolite relations and hepatic zones of elimination, capillary recruitment, capillary heterogeneity, and intrahepatic shunting is discussed. Differences between the models in hepatic parameter estimation, extrapolation to different conditions, and interspecies scaling are discussed, and criteria for choosing one model over the others are presented. In this case, the distributed model provides the most general framework for describing physiological processes taking place in the liver, and unlike the other two models it has so far not been experimentally refuted. These simpler models may, however, provide useful bounds on parameter estimates and on extrapolations and risk assessments.
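The first two models admit simple closed forms, which makes the divergence between their predictions easy to display. The sketch below uses illustrative physiological values; the distributed sinusoidal perfusion model has no comparably simple expression and is omitted.

```python
import numpy as np

# Illustrative physiological values (assumed, not from the paper):
Q = 1.5                                         # hepatic blood flow, L/min
fu = 0.1                                        # unbound fraction in blood
cl_int = np.array([1.0, 10.0, 100.0, 1000.0])   # intrinsic clearance, L/min

cl_ws = Q * fu * cl_int / (Q + fu * cl_int)     # venous equilibration model
cl_pt = Q * (1.0 - np.exp(-fu * cl_int / Q))    # parallel tube model

for ci, ws, pt in zip(cl_int, cl_ws, cl_pt):
    print(f"CLint={ci:7.1f}  well-stirred CL={ws:.3f}  "
          f"parallel-tube CL={pt:.3f}")
```

The two agree at low intrinsic clearance, but as elimination becomes flow-limited the parallel tube model predicts the higher extraction for the same intrinsic clearance, so the same data fitted under each model yield different parameter estimates and different extrapolations.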
9.
The small-sample performance of least median of squares (LMS), reweighted least squares (RLS), least squares, least absolute deviations, and three partially adaptive estimators is compared using Monte Carlo simulations. Two data problems are addressed in the paper: (1) data generated from non-normal error distributions and (2) contaminated data. Breakdown plots are used to investigate the sensitivity of partially adaptive estimators to data contamination relative to RLS. One partially adaptive estimator performs especially well when the errors are skewed, while another partially adaptive estimator and RLS perform particularly well when the errors are extremely leptokurtic. In comparison with RLS, partially adaptive estimators are only moderately effective in resisting data contamination; however, they outperform the least squares and least absolute deviations estimators.
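A much smaller-scale version of such a Monte Carlo comparison, pitting ordinary least squares against least absolute deviations under 10% gross contamination; the LMS, RLS, and partially adaptive estimators studied in the paper are not implemented here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n, reps, beta = 50, 500, np.array([1.0, 2.0])
mse = {"OLS": 0.0, "LAD": 0.0}

for _ in range(reps):
    x = np.column_stack([np.ones(n), rng.normal(0.0, 1.0, n)])
    e = rng.normal(0.0, 1.0, n)
    e[: n // 10] = rng.normal(0.0, 10.0, n // 10)   # 10% gross contamination
    y = x @ beta + e

    b_ols = np.linalg.lstsq(x, y, rcond=None)[0]
    # LAD: minimize the sum of absolute residuals, seeded at the OLS fit
    b_lad = minimize(lambda b: np.abs(y - x @ b).sum(),
                     b_ols, method="Nelder-Mead").x

    mse["OLS"] += np.sum((b_ols - beta) ** 2) / reps
    mse["LAD"] += np.sum((b_lad - beta) ** 2) / reps

print({k: round(v, 4) for k, v in mse.items()})
```

LAD downweights large residuals and so resists outliers in the errors, though like least squares it still has zero breakdown against leverage points, which is why the high-breakdown LMS and RLS estimators serve as the benchmark in the paper.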
10.
Annual concentrations of toxic air contaminants are of primary concern from the perspective of chronic human exposure assessment and risk analysis. Despite recent advances in air quality monitoring technology, resource and technical constraints often impose limitations on the availability of a sufficient number of ambient concentration measurements for performing environmental risk analysis. Therefore, sample size limitations, representativeness of data, and uncertainties in the estimated annual mean concentration must be examined before performing quantitative risk analysis. In this paper, we discuss several factors that need to be considered in designing field-sampling programs for toxic air contaminants and in verifying compliance with environmental regulations. Specifically, we examine the behavior of SO2, TSP, and CO data as surrogates for toxic air contaminants and as examples of point source, area source, and line source-dominated pollutants, respectively, from the standpoint of sampling design. We demonstrate the use of the bootstrap resampling method and normal theory in estimating the annual mean concentration and its 95% confidence bounds from limited sampling data, and illustrate the application of operating characteristic (OC) curves to determine optimum sample size and other sampling strategies. We also outline a statistical procedure, based on a one-sided t-test, that utilizes the sampled concentration data to evaluate whether a sampling site is in compliance with relevant ambient guideline concentrations for toxic air contaminants.
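A minimal sketch of the bootstrap step on synthetic concentration data follows; the lognormal shape and sample size are illustrative assumptions, and the OC-curve and compliance-test steps are not shown.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical 24 sampling days; lognormal shapes are common for
# ambient concentration data, but this is purely illustrative.
conc = rng.lognormal(mean=1.0, sigma=0.8, size=24)

# Percentile bootstrap for the annual mean
boot_means = np.array([
    rng.choice(conc, size=conc.size, replace=True).mean()
    for _ in range(10_000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])

# Normal-theory interval for comparison
se = conc.std(ddof=1) / np.sqrt(conc.size)
print(f"sample mean: {conc.mean():.2f}")
print(f"bootstrap 95% CI: ({lo:.2f}, {hi:.2f})")
print(f"normal-theory 95% CI: ({conc.mean() - 1.96 * se:.2f}, "
      f"{conc.mean() + 1.96 * se:.2f})")
```

With small, skewed samples the bootstrap interval is asymmetric about the mean, whereas the normal-theory interval is forced to be symmetric; comparing the two is one simple check of whether the normal approximation is adequate before any quantitative risk analysis.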