81.
Various approaches to obtaining estimates based on preliminary data are outlined. A case is then considered which frequently arises when selecting a subsample of units, the information for which is collected within a deadline that allows preliminary estimates to be produced. At the moment when these estimates have to be produced, it often occurs that, although the collection of data on subsample units is still not complete, information is available on a set of units which does not belong to the subsample selected for the production of the preliminary estimates. An estimation method is proposed which allows all the data available on a given date to be used in full, and the expressions for the expectation and variance of the estimator are derived. The proposal is based on two-phase sampling theory and on the hypothesis that the response mechanism is the result of random processes whose parameters can be suitably estimated. An empirical analysis of the performance of the estimator on the Italian Survey on Building Permits concludes the work. Sections 1-4 and the technical appendices were developed by Giorgio Alleva and Piero Demetrio Falorsi; Section 5 by Fabio Bacchini and Roberto Iannaccone. Piero Demetrio Falorsi is chief statistician at the Italian National Institute of Statistics (ISTAT); Giorgio Alleva is Professor of Statistics at the University "La Sapienza" of Rome; Fabio Bacchini and Roberto Iannaccone are researchers at ISTAT.
82.
A NOTE ON VARIANCE ESTIMATION FOR THE GENERALIZED REGRESSION PREDICTOR
The generalized regression (GREG) predictor is used for estimating a finite population total when the study variable is well-related to the auxiliary variable. In 1997, Chaudhuri & Roy provided an optimal estimator for the variance of the GREG predictor within a class of non-homogeneous quadratic estimators (H) under a certain superpopulation model M. They also found an inequality concerning the expected variances of the estimators of the variance of the GREG predictor belonging to the class H under the model M. This paper shows that the derivation of the optimal estimator and relevant inequality, presented by Chaudhuri & Roy, are incorrect.
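As an illustration of the estimator under discussion, the GREG predictor of a population total with a single auxiliary variable can be sketched as follows. This is a minimal version assuming a slope-through-origin working model; the function and variable names are ours, not the paper's:

```python
# Illustrative GREG predictor for a population total with one auxiliary
# variable, under a slope-through-origin working model.

def greg_total(y, x, pi, x_total):
    """Generalized regression (GREG) predictor of the population total of y.

    y, x    : study and auxiliary values for the sampled units
    pi      : first-order inclusion probabilities of the sampled units
    x_total : known population total of the auxiliary variable x
    """
    d = [1.0 / p for p in pi]                      # design weights
    ht_y = sum(di * yi for di, yi in zip(d, y))    # Horvitz-Thompson total of y
    ht_x = sum(di * xi for di, xi in zip(d, x))    # Horvitz-Thompson total of x
    # weighted slope of the working model y = B * x (through the origin)
    b = sum(di * xi * yi for di, xi, yi in zip(d, x, y)) / \
        sum(di * xi * xi for di, xi in zip(d, x))
    return ht_y + b * (x_total - ht_x)             # calibration adjustment
```

When y is exactly proportional to x, the calibration term reproduces the true total regardless of which sample was drawn, which is the sense in which the GREG predictor exploits a well-related auxiliary variable.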
83.
This paper develops clinical trial designs that compare two treatments with a binary outcome. The imprecise beta class (IBC), a class of beta probability distributions, is used in a robust Bayesian framework to calculate posterior upper and lower expectations for treatment success rates using accumulating data. The posterior expectation for the difference in success rates can be used to decide when there is sufficient evidence for randomized treatment allocation to cease. This design is formally related to the randomized play-the-winner (RPW) design, an adaptive allocation scheme where randomization probabilities are updated sequentially to favour the treatment with the higher observed success rate. A connection is also made between the IBC and the sequential clinical trial design based on the triangular test. Theoretical and simulation results are presented to show that the expected sample sizes on the truly inferior arm are lower using the IBC compared with either the triangular test or the RPW design, and that the IBC performs well against established criteria involving error rates and the expected number of treatment failures.
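A minimal sketch of the posterior bounds such a class yields, assuming Walley-style near-ignorance priors Beta(s·t, s·(1−t)) with t ranging over (0, 1) and a fixed learning parameter s. This parameterization is our assumption for illustration, not necessarily the paper's exact class:

```python
# Sketch of imprecise-beta-class posterior bounds on a binomial success rate,
# assuming priors Beta(s*t, s*(1-t)) with t free in (0, 1) and s fixed.

def ibc_posterior_bounds(successes, n, s=2.0):
    """Lower and upper posterior expectations of the success rate."""
    lower = successes / (s + n)          # prior t -> 0
    upper = (s + successes) / (s + n)    # prior t -> 1
    return lower, upper

def difference_bounds(succ_a, n_a, succ_b, n_b, s=2.0):
    """Bounds on the posterior expectation of p_a - p_b."""
    lo_a, hi_a = ibc_posterior_bounds(succ_a, n_a, s)
    lo_b, hi_b = ibc_posterior_bounds(succ_b, n_b, s)
    return lo_a - hi_b, hi_a - lo_b
```

A stopping rule of the kind described would compare these bounds on the difference in success rates against a threshold: when even the lower expectation favours one arm, randomized allocation can cease.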
84.
Using Time Intervals Between Expected Events to Communicate Risk Magnitudes
Because members of the public have difficulty understanding risk presented in terms of odds ratios (e.g., 1 in 1000) and in comparing odds ratios from different hazards, we examined the use of time intervals between expected harmful events to communicate risk. Perceptions of the risk from a hypothetical instance of naturally occurring, cancer-causing arsenic in drinking water supplies were examined with a sample of 705 homeowners. The risk was described as either 1 in 1000 or 1 in 100,000 and as present in a town of 2000 people or a city of 200,000 people. With these parameters, the time intervals ranged from 1 expected death in 3500 years (1 in 100,000 risk, small town) to 1 death every 4 months (1 in 1000 risk, city). The addition of time intervals to the odds ratios significantly decreased perceived threat and perceived need for action in the small town but did not affect responses for the city. These framing effects were nearly as large as a 100-fold difference in actual risk. Instances when this communication approach may be useful are discussed.
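The quoted intervals follow from one line of arithmetic, assuming an illustrative 70-year lifetime over which the stated lifetime risk applies (the lifetime figure is our assumption; it reproduces the intervals in the abstract):

```python
# Convert a lifetime risk and a population size into the expected time
# interval between deaths, as in the abstract's framing.

def years_between_deaths(population, lifetime_risk, lifetime_years=70):
    """Expected number of years between deaths from the hazard."""
    deaths_per_year = population * lifetime_risk / lifetime_years
    return 1.0 / deaths_per_year
```

With these inputs, a 1-in-100,000 risk in a town of 2000 gives one expected death in 3500 years, while a 1-in-1000 risk in a city of 200,000 gives one death roughly every 4 months (0.35 years), matching the figures above.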
85.
Combining Probability Distributions From Experts in Risk Analysis
This paper concerns the combination of experts' probability distributions in risk analysis, discussing a variety of combination methods and attempting to highlight the important conceptual and practical issues to be considered in designing a combination process in practice. The role of experts is important because their judgments can provide valuable information, particularly in view of the limited availability of hard data regarding many important uncertainties in risk analysis. Because uncertainties are represented in terms of probability distributions in probabilistic risk analysis (PRA), we consider expert information in terms of probability distributions. The motivation for the use of multiple experts is simply the desire to obtain as much information as possible. Combining experts' probability distributions summarizes the accumulated information for risk analysts and decision-makers. Procedures for combining probability distributions are often compartmentalized as mathematical aggregation methods or behavioral approaches, and we discuss both categories. However, an overall aggregation process could involve both mathematical and behavioral aspects, and no single process is best in all circumstances. An understanding of the pros and cons of different methods and the key issues to consider is valuable in the design of a combination process for a specific PRA. The output, a combined probability distribution, can ideally be viewed as representing a summary of the current state of expert opinion regarding the uncertainty of interest.
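As a concrete instance of the mathematical-aggregation category, the linear opinion pool takes a weighted average of the experts' distributions. A sketch for discrete distributions follows; the choice of weights is itself a design decision of the kind the paper discusses:

```python
# Linear opinion pool: a weighted average of expert probability
# distributions defined over the same discrete outcomes.

def linear_opinion_pool(distributions, weights=None):
    """Combine expert distributions; equal weights unless specified."""
    k = len(distributions)
    if weights is None:
        weights = [1.0 / k] * k
    n_outcomes = len(distributions[0])
    return [sum(w * dist[i] for w, dist in zip(weights, distributions))
            for i in range(n_outcomes)]
```

Because each input sums to one and the weights sum to one, the pooled result is again a probability distribution, which is one reason this method is a common baseline in PRA aggregation.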
86.
Survey sampling textbooks often refer to the Sen–Yates–Grundy variance estimator for use with without-replacement unequal probability designs. This estimator is rarely implemented because of the complexity of determining joint inclusion probabilities. In practice, the variance is usually estimated by simpler variance estimators such as the Hansen–Hurwitz with-replacement variance estimator, which often leads to overestimation of the variance for the large sampling fractions that are common in business surveys. We consider an alternative: the Hájek (1964) variance estimator, which depends on the first-order inclusion probabilities only and is usually more accurate than the Hansen–Hurwitz estimator. We review this estimator and show its practical value. We propose an alternative expression, which is as simple as the Hansen–Hurwitz estimator. We also show how the Hájek estimator can easily be implemented with standard statistical packages.
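A sketch of a Hájek-type variance estimator of this kind, using only first-order inclusion probabilities. The constants below follow one common form for fixed-size designs and are our assumption, not necessarily the paper's exact expression:

```python
# Hajek-type variance estimator for the Horvitz-Thompson total under a
# fixed-size, without-replacement unequal-probability design, using only
# first-order inclusion probabilities.

def hajek_variance(y, pi):
    """Approximate design variance of the Horvitz-Thompson total of y."""
    n = len(y)
    z = [yi / p for yi, p in zip(y, pi)]            # expanded values y_i / pi_i
    c = [n * (1.0 - p) / (n - 1) for p in pi]       # weights c_i = n(1-pi_i)/(n-1)
    b_hat = sum(ci * zi for ci, zi in zip(c, z)) / sum(c)
    return sum(ci * (zi - b_hat) ** 2 for ci, zi in zip(c, z))
```

Note that the factors (1 − pi_i) shrink the contribution of units sampled with high probability, which is how the estimator avoids the overestimation that the with-replacement Hansen–Hurwitz formula exhibits at large sampling fractions.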
87.
In the context of the data and issues discussed by Goldstein and Spiegelhalter, we suggest refinements which can be used by decision makers when confronted with ranking problems associated with 'league tables'. Two ranking criteria are defined and their performance illustrated for one of the studies reported by Goldstein and Spiegelhalter.
88.
We provide Bayesian methodology to relax the assumption that all subpopulation effects in a linear mixed-effects model have, after adjustment for covariates, a common mean. We expand the model specification by assuming that the m subpopulation effects are allowed to cluster into d groups, where the value of d, 1 ≤ d ≤ m, and the composition of the d groups are unknown a priori. Specifically, for each partition of the m effects into d groups we only assume that the subpopulation effects in each group are exchangeable and are independent across the groups. We show that failure to take account of this clustering, as with the customary method, will lead to serious errors in inference about the variances and subpopulation effects, whereas the proposed, expanded model leads to appropriate inferences. The efficacy of the proposed method is evaluated by contrasting it with both the customary method and the use of a Dirichlet process prior. We use data from small area estimation to illustrate our method.
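To see the size of the space the prior must range over: the groupings of the m effects are the set partitions of {1, …, m}, of which there are Bell-number many. A small sketch enumerates them (illustrative only; this is not the paper's sampling algorithm):

```python
# Enumerate all set partitions of a list of items (Bell-number many),
# i.e. all ways the m effects could cluster into groups.

def partitions(items):
    """Yield every partition of items as a list of groups."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        # place `first` into each existing group in turn ...
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        # ... or start a new group containing only `first`
        yield part + [[first]]
```

For m = 3 this yields 5 partitions and for m = 4 it yields 15, growing quickly with m, which is why posterior inference over partitions is nontrivial.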
89.
Montesano, Aldo. Theory and Decision (2001) 51(2-4): 183-195
The Choquet expected utility model deals with nonadditive probabilities (or capacities). Their dependence on the information the decision-maker has about the possibility of the events is taken into account. Two kinds of information are examined: interval information (for instance, the percentage of white balls in an urn is between 60% and 100%) and comparative information (for instance, the information that there are more white balls than black ones). Some implications are shown with regard to the core of the capacity and to two additive measures which can be derived from capacities: the Shapley value and the nucleolus. Interval information bounds prove to be satisfied by all probabilities in the core, but they are not necessarily satisfied by the nucleolus (when the core is empty) or the Shapley value. We must introduce the constrained nucleolus in order for these bounds to be satisfied, while the Shapley value does not seem to be adjustable. On the contrary, comparative information inequalities prove not to be necessarily satisfied by all probabilities in the core, and we must introduce the constrained core in order for these inequalities to be satisfied. However, both the nucleolus and the Shapley value satisfy the comparative information inequalities, and the Shapley value does so more strictly than the nucleolus. This revised version was published online in June 2006 with corrections to the Cover Date.
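The Shapley value referred to here is the standard one from cooperative game theory; for a small capacity it can be computed by brute force, averaging each event's marginal contribution over all orderings. The capacity below is a made-up two-event example, not one from the paper:

```python
from itertools import permutations
from math import factorial

def shapley_value(players, v):
    """Shapley value of a capacity v, given as a function on frozensets."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):          # all n! orderings
        coalition = frozenset()
        for p in order:
            # marginal contribution of p when joining this coalition
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_fact = factorial(len(players))
    return {p: total / n_fact for p, total in phi.items()}
```

For an additive capacity the Shapley value simply returns the underlying probabilities; for a nonadditive one it distributes the "missing" or "excess" mass symmetrically, which is why it need not respect interval information bounds.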
90.
For the analysis of square contingency tables with nominal categories, Tomizawa and coworkers have considered measures that represent the degree of departure from symmetry. This paper proposes a measure that represents the degree of asymmetry for square contingency tables with ordered categories (instead of those with nominal categories). The measure proposed is expressed using the Cressie–Read power-divergence or Patil–Taillie diversity index, defined for the cumulative probabilities that an observation falls in row (column) category i or below and column (row) category j (> i) or above. The measure depends on the order of listing the categories. It should be useful for comparing the degree of asymmetry in several tables with ordered categories. The relationship between the measure and the normal distribution is shown.
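The Cressie–Read family mentioned here has the standard form I_λ(p : q) = (1/(λ(λ+1))) Σ_i p_i[(p_i/q_i)^λ − 1]. A direct sketch for λ ≠ 0, −1, applied to ordinary discrete distributions rather than the paper's cumulative probabilities:

```python
# Cressie-Read power divergence between two discrete distributions,
# valid for lam not equal to 0 or -1 (those cases are limits:
# Kullback-Leibler and reversed Kullback-Leibler, respectively).

def power_divergence(p, q, lam):
    """I_lambda(p : q) for distributions given as equal-length lists."""
    return sum(pi * ((pi / qi) ** lam - 1)
               for pi, qi in zip(p, q)) / (lam * (lam + 1))
```

At λ = 1 the expression reduces to half the Pearson chi-squared distance, and it is zero exactly when p equals q, the property a symmetry measure built on it exploits.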