1713 results found; search time 78 ms.
1.
If a population contains many zero values and the sample size is not very large, traditional normal approximation-based confidence intervals for the population mean may have poor coverage probabilities. This problem is substantially reduced by constructing parametric likelihood ratio intervals when an appropriate mixture model can be found. In the context of survey sampling, however, there is a general preference for making minimal assumptions about the population under study. The authors have therefore investigated the coverage properties of nonparametric empirical likelihood confidence intervals for the population mean. They show that, for a variety of hypothetical populations, these intervals often outperform parametric likelihood intervals by having more balanced coverage rates and larger lower bounds. The authors illustrate their methodology using data from the Canadian Labour Force Survey for the year 2000.
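The interval described above can be illustrated with a minimal sketch of the standard one-sample empirical likelihood interval for a mean (Owen's construction), not the authors' survey-calibrated version; the zero-inflated toy data, grid size, and all other choices below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_stat(x, mu):
    """-2 log empirical likelihood ratio for the mean at mu."""
    d = x - mu
    if d.max() <= 0 or d.min() >= 0:           # mu outside the data hull
        return np.inf
    eps = 1e-10
    # solve the Lagrange-multiplier equation sum d_i/(1 + lam*d_i) = 0
    g = lambda lam: np.sum(d / (1.0 + lam * d))
    lam = brentq(g, -1.0 / d.max() + eps, -1.0 / d.min() - eps)
    return 2.0 * np.sum(np.log1p(lam * d))

def el_ci(x, level=0.95, grid=2000):
    """Confidence interval for the mean by inverting the EL ratio test."""
    cut = chi2.ppf(level, df=1)
    mus = np.linspace(x.min(), x.max(), grid)[1:-1]
    keep = [m for m in mus if el_stat(x, m) <= cut]
    return min(keep), max(keep)

# zero-inflated toy data: 60% exact zeros, the rest exponential
rng = np.random.default_rng(0)
x = np.concatenate([np.zeros(30), rng.exponential(2.0, 20)])
lo, hi = el_ci(x)
```

Because the empirical likelihood interval is data-driven, its lower bound for a zero-inflated sample like this stays strictly positive, which is the behaviour the abstract highlights.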
2.
Summary.  Non-ignorable missing data, a serious problem in both clinical trials and observational studies, can lead to biased inferences. Quality-of-life measures have become increasingly popular in clinical trials. However, these measures are often incompletely observed, and investigators may suspect that missing quality-of-life data are likely to be non-ignorable. Although several recent references have addressed missing covariates in survival analysis, they all required the assumption that missingness is at random or that all covariates are discrete. We present a method for estimating the parameters in the Cox proportional hazards model when missing covariates may be non-ignorable and continuous or discrete. Our method is useful in reducing the bias and improving efficiency in the presence of missing data. The methodology clearly specifies assumptions about the missing data mechanism and, through sensitivity analysis, helps investigators to understand the potential effect of missing data on study results.
3.
Merging information for semiparametric density estimation
Summary.  The density ratio model specifies that the likelihood ratios of m−1 probability density functions with respect to the mth are of known parametric form, without reference to any parametric model. We study the semiparametric inference problem related to the density ratio model by appealing to the methodology of empirical likelihood. Combining the data from all the samples leads to more efficient kernel density estimators for the unknown distributions. We adopt variants of well-established techniques to choose the smoothing parameter for the proposed density estimators.
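A standard route to fitting the density ratio model with two samples is its connection to logistic regression: pool the samples and regress the sample label on x, and the fitted coefficients estimate the tilt exp(α + βx). The sketch below illustrates only that connection, not the authors' combined kernel estimator; the normal samples, the true (α, β) = (−0.5, 1), and the use of BFGS are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

# two samples whose true density ratio is exp(alpha + beta*x):
# N(1,1) / N(0,1) gives beta = 1, alpha = -0.5
rng = np.random.default_rng(1)
x0 = rng.normal(0.0, 1.0, 4000)    # reference sample (the m-th density)
x1 = rng.normal(1.0, 1.0, 4000)    # tilted sample
x = np.concatenate([x0, x1])
y = np.concatenate([np.zeros(len(x0)), np.ones(len(x1))])

def nll(theta):
    """Negative log-likelihood of the logistic regression of the
    sample label on x (numerically stable via logaddexp)."""
    a, b = theta
    z = a + b * x
    return np.sum(np.logaddexp(0.0, z) - y * z)

a_hat, b_hat = minimize(nll, [0.0, 0.0], method="BFGS").x
```

With equal sample sizes the intercept estimates α directly; with unequal sizes one would subtract log(n1/n0) from it.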
4.
Demonstrated equivalence between a categorical regression model based on case‐control data and an I‐sample semiparametric selection bias model leads to a new goodness‐of‐fit test. The proposed test statistic is an extension of an existing Kolmogorov–Smirnov‐type statistic and is the weighted average of the absolute differences between two estimated distribution functions in each response category. The paper establishes an optimal property for the maximum semiparametric likelihood estimator of the parameters in the I‐sample semiparametric selection bias model. It also presents a bootstrap procedure, some simulation results and an analysis of two real datasets.
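The general shape of such a statistic — a weighted average over response categories of the distance between an empirical and a fitted distribution function — can be sketched as below. The paper's weights and estimated distribution functions differ; the uniform samples, the identity fitted CDF, and the weights here are all illustrative.

```python
import numpy as np

def ecdf(sample, t):
    """Empirical distribution function of `sample` evaluated at t."""
    return np.searchsorted(np.sort(sample), t, side="right") / len(sample)

def ks_type_stat(samples, fitted_cdfs, weights):
    """Weighted average over response categories of the sup-distance
    between the empirical and a fitted distribution function,
    evaluated at the observed sample points."""
    sups = [np.max(np.abs(ecdf(s, np.sort(s)) - F(np.sort(s))))
            for s, F in zip(samples, fitted_cdfs)]
    return float(np.dot(weights, sups))

# two categories, both genuinely Uniform(0,1), fitted CDF F(t) = t,
# so the statistic should be small
rng = np.random.default_rng(0)
samples = [rng.random(500), rng.random(800)]
stat = ks_type_stat(samples, [lambda t: t, lambda t: t], [0.4, 0.6])
```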
5.
We discuss Bayesian analyses of traditional normal-mixture models for classification and discrimination. The development applies an iterative resampling approach to Monte Carlo inference, commonly called Gibbs sampling, and demonstrates its routine use. We stress the benefits of exact analyses over traditional classification and discrimination techniques, including the ease with which such analyses may be performed in a quite general setting, with possibly several normal-mixture components having different covariance matrices; the computation of exact posterior classification probabilities for observed data and for future cases to be classified; and posterior distributions for these probabilities that allow assessment of second-level uncertainties in classification.
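A stripped-down sketch of the Gibbs sampler in this setting, assuming a two-component mixture with equal weights and known unit variances so that only the component means are sampled; the full analysis also samples mixing weights and covariance matrices. The toy data, priors, and iteration counts are illustrative.

```python
import numpy as np

def gibbs_mixture(x, iters=2000, sigma=1.0, tau=10.0, seed=0):
    """Gibbs sampler for a two-component normal mixture with equal
    weights and known variance sigma**2; the two means are unknown,
    with independent N(0, tau**2) priors."""
    rng = np.random.default_rng(seed)
    mu = np.array([x.min(), x.max()])          # crude but stable start
    draws = np.empty((iters, 2))
    for it in range(iters):
        # step 1: sample the allocations z_i given the current means
        logp = -0.5 * ((x[:, None] - mu[None, :]) / sigma) ** 2
        p1 = 1.0 / (1.0 + np.exp(logp[:, 0] - logp[:, 1]))
        z = rng.random(len(x)) < p1            # True -> component 1
        # step 2: sample each mean from its conjugate normal posterior
        for k, mask in enumerate((~z, z)):
            prec = mask.sum() / sigma**2 + 1.0 / tau**2
            mu[k] = rng.normal(x[mask].sum() / sigma**2 / prec,
                               1.0 / np.sqrt(prec))
        draws[it] = mu
    return draws

# toy data: two well-separated components at -2 and +2
rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(2.0, 1.0, 200)])
draws = gibbs_mixture(x)
post = draws[500:].mean(axis=0)                # posterior means after burn-in
```

The allocation step also yields the posterior classification probabilities `p1` for each observation, which is the quantity the abstract emphasises.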
6.
Laud et al. (1993) describe a method for random variate generation from D-distributions. In this paper, an alternative method using substitution sampling is given. An algorithm for random variate generation from SD-distributions is also provided.
7.
The well-known chi-squared goodness-of-fit test for a multinomial distribution is generally biased when the observations are subject to misclassification. In Pardo and Zografos (2000) the problem was considered using a double sampling scheme and φ-divergence test statistics. A new problem appears if the null hypothesis is not simple because it is necessary to give estimators for the unknown parameters. In this paper the minimum φ-divergence estimators are considered and some of their properties are established. The proposed φ-divergence test statistics are obtained by calculating φ-divergences between probability density functions and by replacing parameters by their minimum φ-divergence estimators in the derived expressions. Asymptotic distributions of the new test statistics are also obtained. The testing procedure is illustrated with an example.
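The φ-divergence family referred to here contains Pearson's chi-squared as a special case. SciPy exposes the Cressie–Read power-divergence subfamily directly, which makes the relationship easy to check; the counts below are made up for illustration.

```python
import numpy as np
from scipy.stats import chisquare, power_divergence

f_obs = np.array([18, 25, 57])     # made-up observed counts
f_exp = np.array([20, 30, 50])     # hypothesised expected counts

# Pearson's chi-squared is the lambda = 1 member of the family ...
stat_pearson, _ = power_divergence(f_obs, f_exp, lambda_=1)
stat_chi2, _ = chisquare(f_obs, f_exp)
# ... and the likelihood-ratio G statistic is the lambda -> 0 member
stat_g, _ = power_divergence(f_obs, f_exp, lambda_="log-likelihood")
```

Different choices of `lambda_` (or, more generally, of the φ function) give test statistics with the same chi-squared asymptotics but different finite-sample behaviour, which is what motivates studying the whole family.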
8.
On the Brand Capital of Scientific and Technical Journals
Brand capital is a key factor in the survival and development of a scientific and technical journal, and it embodies the unity of social and economic benefits. The return on brand capital is a slow but quite stable process. A sample survey and an analysis of variance are used to demonstrate the validity of these arguments quantitatively.
9.
Not having a variance estimator is a serious weakness of a sampling design from a practical perspective. This paper provides unbiased variance estimators for several sampling designs based on inverse sampling, both with and without an adaptive component. It proposes a new design, called the general inverse sampling design, that avoids sampling an infeasibly large number of units. The paper provides estimators for this design as well as for its adaptive modification. A simple artificial example is used to demonstrate the computations. The adaptive and non-adaptive designs are compared using simulations based on real data sets. The results indicate that, for appropriate populations, the adaptive version can achieve a substantial variance reduction compared with the non-adaptive version. Moreover, adaptive general inverse sampling with a limit on the initial sample size yields a greater variance reduction than without the limit.
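For classical single-stage inverse sampling (draw Bernoulli units until k successes, with random total sample size N), Haldane's estimator (k−1)/(N−1) of the success proportion is unbiased, whereas the naive k/N is biased upward. The simulation below illustrates only this simple design, not the paper's general inverse sampling design; p, k, and the replication count are illustrative.

```python
import numpy as np

def inverse_sample(p, k, rng):
    """Sequential Bernoulli(p) draws until k successes; returns the
    (random) total sample size N."""
    n = successes = 0
    while successes < k:
        n += 1
        successes += rng.random() < p
    return n

rng = np.random.default_rng(0)
p, k = 0.3, 5
N = np.array([inverse_sample(p, k, rng) for _ in range(10000)])
unbiased = (k - 1) / (N - 1)       # Haldane's unbiased estimator
naive = k / N                      # biased upward, since k/N > (k-1)/(N-1)
```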
10.
Projecting losses associated with hurricanes is a complex and difficult undertaking that is fraught with uncertainty. Hurricane Charley, which struck southwest Florida on August 13, 2004, illustrates the uncertainty of forecasting damages from these storms. Due to shifts in the track and the rapid intensification of the storm, real-time estimates grew from 2 to 3 billion dollars in losses late on August 12 to a peak of 50 billion dollars for a brief time as the storm appeared to be headed for the Tampa Bay area. The storm hit the resort areas of Charlotte Harbor near Punta Gorda and then went on to Orlando in the central part of the state, with early poststorm estimates converging on a damage estimate in the 28 to 31 billion dollars range. Comparable damage to central Florida had not been seen since Hurricane Donna in 1960. The Florida Commission on Hurricane Loss Projection Methodology (FCHLPM) has recognized the role of computer models in projecting losses from hurricanes. The FCHLPM established a professional team to perform onsite (confidential) audits of computer models developed by several different companies in the United States that seek to have their models approved for use in insurance rate filings in Florida. The team's members represent the fields of actuarial science, computer science, meteorology, statistics, and wind and structural engineering. An important part of the auditing process requires uncertainty and sensitivity analyses to be performed with the applicant's proprietary model. To inform such future analyses, an uncertainty and sensitivity analysis has been completed for loss projections arising from use of a Holland B parameter hurricane wind field model. Uncertainty analysis quantifies the expected percentage reduction in the uncertainty of wind speed and loss that is attributable to each of the input variables.
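The B parameter enters through the Holland (1980) gradient wind profile, and the propagation of uncertainty in B can be sketched with a small Monte Carlo experiment. The Coriolis term is omitted, and the pressure deficit, radius of maximum winds, air density, and the uniform range for B are all illustrative assumptions, not values from the audited models.

```python
import numpy as np

def holland_wind(r_km, B, dp_pa=5000.0, rmax_km=30.0, rho=1.15):
    """Holland (1980) gradient wind speed (m/s) at radius r_km from
    the storm centre, with the Coriolis term omitted for simplicity."""
    a = (rmax_km / r_km) ** B
    return np.sqrt(B * dp_pa * a * np.exp(-a) / rho)

# Monte Carlo propagation of uncertainty in the B ("peakedness") parameter
rng = np.random.default_rng(0)
B_draws = rng.uniform(1.0, 2.5, 10000)      # assumed plausible range for B
v_at_50km = holland_wind(50.0, B_draws)     # wind speed 50 km from centre
lo, hi = np.percentile(v_at_50km, [5, 95])  # 90% uncertainty band
```

Because losses grow steeply with wind speed, even a modest spread in B translates into a wide band of projected losses, which is why B is a natural focus of the sensitivity analysis.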