A total of 1,809 results were found; the first ten are listed below.
1.
If a population contains many zero values and the sample size is not very large, traditional confidence intervals for the population mean based on the normal approximation may have poor coverage probabilities. This problem is substantially reduced by constructing parametric likelihood ratio intervals when an appropriate mixture model can be found. In the context of survey sampling, however, there is a general preference for making minimal assumptions about the population under study. The authors therefore investigate the coverage properties of nonparametric empirical likelihood confidence intervals for the population mean. They show that, for a variety of hypothetical populations, these intervals often outperform parametric likelihood intervals, with more balanced coverage rates and larger lower bounds. The authors illustrate their methodology using data from the Canadian Labour Force Survey for the year 2000.
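A minimal sketch of how a nonparametric empirical likelihood interval for a mean can be computed by profiling the likelihood ratio over candidate means. This is illustrative only, not the authors' survey-weighted implementation; the zero-inflated test data, the grid resolution, and the chi-square calibration level are assumptions.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu.

    Solves for the Lagrange multiplier lam in
    sum_i (x_i - mu) / (1 + lam * (x_i - mu)) = 0,
    giving weights w_i = 1 / (n * (1 + lam * (x_i - mu))).
    Requires mu strictly inside (min(x), max(x)).
    """
    d = x - mu
    eps = 1e-10
    lo = -1.0 / d.max() + eps          # keep 1 + lam*d_i > 0 for all i
    hi = -1.0 / d.min() - eps
    g = lambda lam: np.sum(d / (1.0 + lam * d))
    lam = brentq(g, lo, hi)            # g is monotone decreasing on (lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))

def el_confidence_interval(x, level=0.95, grid_size=2000):
    """Profile the EL ratio over a grid of candidate means."""
    cutoff = chi2.ppf(level, df=1)
    grid = np.linspace(x.min() + 1e-6, x.max() - 1e-6, grid_size)
    keep = [mu for mu in grid if el_log_ratio(x, mu) <= cutoff]
    return min(keep), max(keep)

rng = np.random.default_rng(0)
# Hypothetical zero-inflated population: ~60% exact zeros, the rest lognormal.
x = rng.lognormal(2.0, 1.0, size=80) * (rng.random(80) > 0.6)
print("EL 95% CI for the mean:", el_confidence_interval(x))
```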
2.
We discuss Bayesian analyses of traditional normal-mixture models for classification and discrimination. The development applies an iterative resampling approach to Monte Carlo inference, commonly called Gibbs sampling, and demonstrates its routine use. We stress the benefits of exact analyses over traditional classification and discrimination techniques: such analyses can be performed easily in quite general settings, possibly with several normal-mixture components having different covariance matrices; exact posterior classification probabilities can be computed both for the observed data and for future cases to be classified; and posterior distributions for these probabilities allow second-level uncertainties in classification to be assessed.
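A minimal univariate sketch of this kind of Gibbs sampler for a two-component normal mixture, accumulating posterior classification probabilities across iterations. The data, prior settings, and iteration counts are assumptions, and the paper's general multivariate treatment with distinct covariance matrices (and handling of label switching) is not reproduced.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
# Hypothetical data from two overlapping normal groups.
x = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(3.0, 1.5, 40)])
n, K = len(x), 2

# Priors: pi ~ Dirichlet(alpha), mu_k ~ N(m0, v0), sigma_k^2 ~ InvGamma(a0, b0).
alpha, m0, v0, a0, b0 = np.ones(K), 0.0, 100.0, 2.0, 2.0

mu, sig2, pi = np.array([-1.0, 1.0]), np.ones(K), np.full(K, 1.0 / K)
n_iter, burn = 3000, 1000
post_prob = np.zeros((n, K))            # running sum of classification probabilities

for it in range(n_iter):
    # 1. Exact conditional classification probabilities and component labels z_i.
    like = pi * norm.pdf(x[:, None], mu, np.sqrt(sig2))
    prob = like / like.sum(axis=1, keepdims=True)
    z = np.array([rng.choice(K, p=p) for p in prob])
    # 2. Mixture weights.
    counts = np.bincount(z, minlength=K)
    pi = rng.dirichlet(alpha + counts)
    # 3. Component means and variances.
    for k in range(K):
        xk = x[z == k]
        prec = 1.0 / v0 + len(xk) / sig2[k]
        mean = (m0 / v0 + xk.sum() / sig2[k]) / prec
        mu[k] = rng.normal(mean, np.sqrt(1.0 / prec))
        sig2[k] = 1.0 / rng.gamma(a0 + len(xk) / 2.0,
                                  1.0 / (b0 + 0.5 * np.sum((xk - mu[k]) ** 2)))
    if it >= burn:
        post_prob += prob               # Monte Carlo estimate of P(z_i = k | data)

post_prob /= (n_iter - burn)
print("Posterior classification probabilities, first 5 cases:\n", post_prob[:5])
```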
3.
Stephen Walker, Statistics and Computing, 1995, 5(4): 311–315
Laud et al. (1993) describe a method for random variate generation from D-distributions. In this paper, an alternative method based on substitution sampling is given. An algorithm for random variate generation from SD-distributions is also provided.
4.
Projecting losses associated with hurricanes is a complex and difficult undertaking, fraught with uncertainty. Hurricane Charley, which struck southwest Florida on August 13, 2004, illustrates the difficulty of forecasting damages from these storms. Due to shifts in the track and the rapid intensification of the storm, real-time loss estimates grew from 2 to 3 billion dollars late on August 12 to a brief peak of 50 billion dollars as the storm appeared to be headed for the Tampa Bay area. The storm hit the resort areas of Charlotte Harbor near Punta Gorda and then moved on to Orlando in the central part of the state, with early post-storm estimates converging on damages in the range of 28 to 31 billion dollars. Comparable damage to central Florida had not been seen since Hurricane Donna in 1960. The Florida Commission on Hurricane Loss Projection Methodology (FCHLPM) has recognized the role of computer models in projecting losses from hurricanes. The FCHLPM established a professional team to perform onsite (confidential) audits of computer models developed by several companies in the United States that seek to have their models approved for use in insurance rate filings in Florida. The team's members represent the fields of actuarial science, computer science, meteorology, statistics, and wind and structural engineering. An important part of the auditing process requires uncertainty and sensitivity analyses to be performed with each applicant's proprietary model. To inform such future analyses, an uncertainty and sensitivity analysis has been completed for loss projections arising from the use of a Holland B parameter hurricane wind field model. The uncertainty analysis quantifies the expected percentage reduction in the uncertainty of wind speed and loss that is attributable to each of the input variables.
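As a rough illustration of the kind of input-uncertainty analysis described (not the FCHLPM audit procedure or any proprietary loss model), the sketch below propagates uncertainty in the Holland B parameter and the central pressure deficit through the standard Holland (1980) gradient wind profile, and reports a crude importance measure: the percentage reduction in output variance when one input is fixed at its mean. The input distributions, radii, and Coriolis value are assumptions.

```python
import numpy as np

RHO = 1.15          # air density, kg/m^3
F = 5e-5            # Coriolis parameter at low latitude, 1/s (assumed)
R_MAX = 30_000.0    # radius of maximum winds, m (assumed)
R = 60_000.0        # radius at which wind speed is evaluated, m (assumed)

def holland_wind(b, dp, r=R):
    """Holland (1980) gradient wind speed (m/s) at radius r.

    dp is the central pressure deficit in Pa, b the Holland B parameter.
    """
    a = (R_MAX / r) ** b
    return np.sqrt(b * dp * a * np.exp(-a) / RHO + (r * F / 2.0) ** 2) - r * F / 2.0

rng = np.random.default_rng(2)
n = 100_000
# Assumed input distributions for the uncertainty analysis.
b = rng.uniform(1.0, 2.0, n)                    # Holland B parameter
dp = rng.normal(6_000.0, 1_000.0, n)            # pressure deficit, Pa

v = holland_wind(b, dp)
total_var = v.var()

# Crude importance measure: % reduction in Var(wind speed) when one input is
# fixed at its mean while the other still varies.
for name, fixed in [("B", holland_wind(np.full(n, b.mean()), dp)),
                    ("pressure deficit", holland_wind(b, np.full(n, dp.mean())))]:
    reduction = 100.0 * (1.0 - fixed.var() / total_var)
    print(f"Fixing {name}: {reduction:.1f}% reduction in wind-speed variance")
```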
5.
6.
Annual concentrations of toxic air contaminants are of primary concern from the perspective of chronic human exposure assessment and risk analysis. Despite recent advances in air quality monitoring technology, resource and technical constraints often limit the number of ambient concentration measurements available for environmental risk analysis. Therefore, sample size limitations, the representativeness of the data, and uncertainties in the estimated annual mean concentration must be examined before performing quantitative risk analysis. In this paper, we discuss several factors that need to be considered in designing field-sampling programs for toxic air contaminants and in verifying compliance with environmental regulations. Specifically, we examine the behavior of SO2, TSP, and CO data as surrogates for toxic air contaminants and as examples of pollutants dominated by point, area, and line sources, respectively, from the standpoint of sampling design. We demonstrate the use of the bootstrap resampling method and normal theory in estimating the annual mean concentration and its 95% confidence bounds from limited sampling data, and illustrate the application of operating characteristic (OC) curves to determine optimum sample size and other sampling strategies. We also outline a statistical procedure, based on a one-sided t-test, that uses the sampled concentration data to evaluate whether a sampling site is in compliance with relevant ambient guideline concentrations for toxic air contaminants.
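A compact sketch of two of the computational pieces mentioned: a bootstrap confidence interval for the annual mean concentration from a limited sample (with the normal-theory interval for comparison), and a one-sided t-test of compliance against an ambient guideline value. The sample data and guideline level are assumptions; OC-curve construction is not shown.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical 24-hour average concentrations (ug/m^3) from a limited campaign.
conc = rng.lognormal(mean=1.0, sigma=0.6, size=30)
guideline = 4.0                       # hypothetical annual guideline, ug/m^3

# Bootstrap 95% confidence bounds on the annual mean concentration.
boot_means = np.array([rng.choice(conc, size=conc.size, replace=True).mean()
                       for _ in range(5000)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"sample mean = {conc.mean():.2f}, bootstrap 95% CI = ({lo:.2f}, {hi:.2f})")

# Normal-theory interval for comparison.
se = conc.std(ddof=1) / np.sqrt(conc.size)
t = stats.t.ppf(0.975, df=conc.size - 1)
print(f"normal-theory 95% CI = ({conc.mean() - t * se:.2f}, {conc.mean() + t * se:.2f})")

# One-sided t-test: H0: true mean >= guideline vs H1: true mean < guideline.
# Rejecting H0 supports a claim of compliance with the guideline.
t_stat, p_two_sided = stats.ttest_1samp(conc, guideline)
p_one_sided = p_two_sided / 2.0 if t_stat < 0 else 1.0 - p_two_sided / 2.0
print(f"one-sided p-value for compliance: {p_one_sided:.4f}")
```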
7.
The authors consider the optimal design of sampling schedules for binary sequence data. They propose an approach that allows a variety of goals to be reflected in the utility function by including a deterministic sampling cost, a term related to prediction and, where relevant, a term related to learning about a treatment effect. To this end, they use a nonparametric probability model relying on a minimal number of assumptions. They show how their assumption of partial exchangeability for the binary sequence of data allows the sampling distribution to be written as a mixture of homogeneous Markov chains of order k. The implementation follows the approach of Quintana & Müller (2004), which uses a Dirichlet process prior for the mixture.
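To make the "mixture of homogeneous Markov chains" representation concrete, here is a small sketch that evaluates the likelihood of a binary sequence under a finite mixture of first-order chains. This is a deliberate simplification: the paper's Dirichlet process prior, order-k dependence, and decision-theoretic design of the sampling schedule are not reproduced, and all numerical values are assumptions.

```python
import numpy as np

def chain_loglik(seq, init_p, trans):
    """Log-likelihood of one binary sequence under a first-order Markov chain.

    init_p = P(y_1 = 1); trans[a, b] = P(y_t = b | y_{t-1} = a).
    """
    ll = np.log(init_p if seq[0] == 1 else 1.0 - init_p)
    for prev, cur in zip(seq[:-1], seq[1:]):
        ll += np.log(trans[prev, cur])
    return ll

def mixture_loglik(seq, weights, init_ps, trans_list):
    """Log-likelihood under a finite mixture of homogeneous chains."""
    comp = [np.log(w) + chain_loglik(seq, p, T)
            for w, p, T in zip(weights, init_ps, trans_list)]
    return np.logaddexp.reduce(comp)

# Two hypothetical mixture components: a "sticky" chain and a purely random one.
weights = [0.7, 0.3]
init_ps = [0.2, 0.5]
trans_list = [np.array([[0.9, 0.1], [0.2, 0.8]]),
              np.array([[0.5, 0.5], [0.5, 0.5]])]

seq = [0, 0, 1, 1, 1, 0, 1, 1]
print("mixture log-likelihood:", mixture_loglik(seq, weights, init_ps, trans_list))
```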
8.
To reduce nonresponse bias in sample surveys, a nonresponse weighting adjustment is often used in which the sampling weight of each respondent is multiplied by the inverse of the estimated response probability. The authors examine the asymptotic properties of this estimator. They prove that it is generally more efficient than an estimator which uses the true response probability, provided that the parameters governing this probability are estimated by maximum likelihood. The authors discuss variance estimation methods that account for the effect of using the estimated response probability and compare their performance in a small simulation study. They also discuss extensions to the regression estimator.
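A small sketch of the basic adjustment itself: response probabilities are estimated by maximum likelihood through a logistic model and then inverted to inflate the respondents' design weights. The simulated population, the single auxiliary covariate, and the logistic specification are assumptions used only to illustrate the estimator, not the authors' asymptotic or variance-estimation results.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 2000
x = rng.normal(size=n)                       # auxiliary variable known for the full sample
y = 5.0 + 2.0 * x + rng.normal(size=n)       # study variable, observed for respondents only
d = np.full(n, 50.0)                         # sampling (design) weights

# Response indicator: response probability depends on x (missing at random).
true_p = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x)))
r = rng.random(n) < true_p

# Estimate response probabilities by maximum likelihood (logistic regression).
X = sm.add_constant(x)
phat = sm.Logit(r.astype(float), X).fit(disp=0).predict(X)

# Nonresponse-adjusted weights: respondents' design weights divided by phat.
w = d[r] / phat[r]
adjusted = np.sum(w * y[r]) / np.sum(w)
naive = y[r].mean()
print(f"respondent mean (unadjusted): {naive:.3f}")
print(f"nonresponse-weighted estimate: {adjusted:.3f}   (population mean is about 5.0)")
```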
9.
Jørund Gåsemyr, Scandinavian Journal of Statistics, 2003, 30(1): 159–173
In this paper, we present a general formulation of an algorithm, the adaptive independent chain (AIC), that was introduced in a special context in Gåsemyr et al. [Methodol. Comput. Appl. Probab. 3 (2001)]. The algorithm aims at producing samples from a specific target distribution Π, and is an adaptive, non-Markovian version of the Metropolis–Hastings independent chain. A certain parametric class of possible proposal distributions is fixed, and the parameters of the proposal distribution are updated periodically on the basis of the recent history of the chain, thereby obtaining proposals that get ever closer to Π. We show that under certain conditions the algorithm produces an exact sample from Π in a finite number of iterations, and hence that it converges to Π. We also present another adaptive algorithm, the componentwise adaptive independent chain (CAIC), which may be an alternative, particularly in high dimensions. The CAIC may be regarded as an adaptive approximation to the Gibbs sampler, updating parametric approximations to the conditionals of Π.
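A toy sketch of the adaptive independence-sampler idea: a Gaussian proposal family whose mean and covariance are refit periodically from the chain's recent history. The target density, proposal family, adaptation schedule, and window length are assumptions, and the paper's exact-sampling and convergence results are not reproduced here.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def log_target(x):
    """Unnormalized log target: a banana-shaped 2-D density (assumed example)."""
    return -0.5 * (x[0] ** 2 / 4.0 + (x[1] - 0.25 * x[0] ** 2) ** 2)

rng = np.random.default_rng(5)
dim, n_iter, update_every = 2, 20_000, 1000
prop_mean, prop_cov = np.zeros(dim), 4.0 * np.eye(dim)   # initial proposal parameters

x = np.zeros(dim)
samples = np.empty((n_iter, dim))
for it in range(n_iter):
    y = mvn.rvs(prop_mean, prop_cov, random_state=rng)
    # Independence-sampler acceptance ratio: pi(y) q(x) / (pi(x) q(y)).
    log_alpha = (log_target(y) - log_target(x)
                 + mvn.logpdf(x, prop_mean, prop_cov)
                 - mvn.logpdf(y, prop_mean, prop_cov))
    if np.log(rng.random()) < log_alpha:
        x = y
    samples[it] = x
    # Periodically refit the proposal to the recent history of the chain.
    if (it + 1) % update_every == 0:
        recent = samples[max(0, it + 1 - 5000): it + 1]
        prop_mean = recent.mean(axis=0)
        prop_cov = np.cov(recent.T) + 1e-3 * np.eye(dim)  # jitter keeps it positive definite

print("posterior mean estimate:", samples[n_iter // 2:].mean(axis=0))
```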
10.
Petros Dellaportas, Statistics and Computing, 1995, 5(2): 133–140
In the non-conjugate Gibbs sampler, the required sampling from the full conditional densities calls for black-box sampling methods. Recent suggestions include rejection sampling, adaptive rejection sampling, the generalized ratio-of-uniforms method, and the Griddy-Gibbs sampler. This paper describes a general idea based on variate transformations that can be tailored to all of the above methods and increases the efficiency of the Gibbs sampler. Moreover, a simple technique for assessing convergence is suggested and illustrative examples are presented.
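A small sketch of one such transformation inside a black-box full-conditional step: a positive parameter is mapped to the log scale (with the Jacobian included) and drawn by a griddy, inverse-CDF step. The particular full conditional used here is an arbitrary assumption chosen only to show the mechanics, not an example from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def log_full_conditional(theta):
    """Unnormalized log full conditional of a positive parameter theta (assumed form)."""
    # Awkward enough that no standard sampler applies directly.
    return 3.0 * np.log(theta) - theta - 0.5 * (np.log(theta) - 1.0) ** 2

def griddy_draw(log_cond, lo=-6.0, hi=6.0, n_grid=400):
    """Draw theta > 0 via a griddy inverse-CDF step on phi = log(theta).

    The transformed density is p(phi) = p(theta = e^phi) * e^phi (Jacobian term).
    """
    phi = np.linspace(lo, hi, n_grid)
    logp = log_cond(np.exp(phi)) + phi                 # include Jacobian e^phi
    p = np.exp(logp - logp.max())                      # stabilize before normalizing
    cdf = np.cumsum(p)
    cdf /= cdf[-1]
    u = rng.random()
    return np.exp(np.interp(u, cdf, phi))              # invert the gridded CDF

draws = np.array([griddy_draw(log_full_conditional) for _ in range(5000)])
print("mean and sd of the sampled full conditional:", draws.mean(), draws.std())
```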