Results: 162 articles (160 subscription full-text, 2 free). By subject: Statistics 143; General 8; Demography 5; Management 2; Collected works 2; Theory and methodology 1; Sociology 1. Publication years range from 1982 to 2022, with a pronounced peak of 73 articles in 2013.
132.
Measures of association are often used to describe the relationship between row and column variables in two-dimensional contingency tables. It is not uncommon in biomedical research to categorize continuous variables to obtain a two-dimensional table. In these situations it is desirable that the measure of association not be too sensitive to changes in the number of categories or to the choice of cut points. To accomplish this objective we attempt to find a measure of association that closely approximates the corresponding measure of association for the underlying distribution. Measures that are close to the underlying measure for various table sizes and cut points are called stable measures.
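To make the stability question concrete, here is a minimal Python sketch (an illustration, not the paper's method): it discretizes one simulated bivariate normal sample with different numbers of equal-probability cut points and recomputes Cramér's V, used here as a generic stand-in for a measure of association, for each table size.

```python
# Illustrative sketch (not the paper's method): how a measure of
# association computed from a discretized table can shift with the
# choice of cut points.  Cramér's V stands in for a generic measure.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

# Bivariate normal sample with correlation 0.6 as the "underlying" data.
n = 5000
cov = [[1.0, 0.6], [0.6, 1.0]]
x, y = rng.multivariate_normal([0, 0], cov, size=n).T

def cramers_v(table):
    chi2 = chi2_contingency(table, correction=False)[0]
    r, c = table.shape
    return np.sqrt(chi2 / (table.sum() * (min(r, c) - 1)))

for k in (2, 3, 5, 10):  # number of categories per margin
    # Equal-probability cut points on each margin.
    qs = np.quantile(x, np.linspace(0, 1, k + 1))
    rs = np.quantile(y, np.linspace(0, 1, k + 1))
    table = np.histogram2d(x, y, bins=[qs, rs])[0]
    print(f"{k}x{k} table: V = {cramers_v(table):.3f}")
```

A stable measure, in the paper's sense, is one for which the printed values would stay close to one another and to the value implied by the underlying distribution.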
133.
The methodic use of Shannon's entropy as a basic concept, complementing probability, leads to a new class of statistics which provides, inter alia, a measure of mutual dissimilarity y between several frequency distributions. Application to contingency tables with any number of dimensions yields a dimensionless, standardised contingency coefficient which depends on the direction of inference and will combine multiplicatively with the number of observed events. This class of statistics further includes a continuous modification W of the number of degrees of freedom in a table, and a measure Q of its overall information content. Numerical illustrations and comparisons with former results are worked out. Direct applications include the optimal partition of a quasicontinuum into cells by maximising Q, the ordering of unordered tables by minimising local values of y, and a tentative absolute weighting of inductive inference based on the minimal necessary shift, required by an hypothesis, between the actually observed data and a set of assumed future events.
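The statistics y, W and Q are the paper's own constructions and are not reproduced here; the sketch below, offered only for orientation, computes the shared entropy ingredients from a small invented table: mutual information and a direction-dependent normalisation (Theil's uncertainty coefficient), which likewise changes with the direction of inference.

```python
# Sketch of the entropy ingredients such statistics are built from
# (the paper's y, W and Q are its own constructions; here we only
# compute mutual information and a directional, normalised
# coefficient from a two-way table).
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

table = np.array([[30, 10, 5],
                  [8, 25, 12],
                  [4, 9, 27]], dtype=float)
p = table / table.sum()          # joint distribution
px, py = p.sum(axis=1), p.sum(axis=0)

mi = entropy(px) + entropy(py) - entropy(p.ravel())  # mutual information
print("I(X;Y) =", round(mi, 4))
# Direction-dependent normalisations (Theil's uncertainty coefficient):
print("U(Y|X) =", round(mi / entropy(py), 4))
print("U(X|Y) =", round(mi / entropy(px), 4))
```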
134.
The common view of the history of contingency tables is that it begins in 1900 with the work of Pearson and Yule, but in fact it extends back at least into the 19th century. Moreover, it remains an active area of research today. In this paper we give an overview of this history, focussing on the development of log-linear models and their estimation via the method of maximum likelihood. Roy played a crucial role in this development, with two papers co-authored with his students Mitra and Marvin Kastenbaum at roughly the temporal midpoint of this development. We then describe a problem that eluded Roy and his students: the implications of sampling zeros for the existence of maximum likelihood estimates for log-linear models. Understanding the problem of non-existence is crucial to the analysis of large sparse contingency tables. We introduce some relevant results from the application of algebraic geometry to the study of this statistical problem.
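The non-existence problem can be made concrete with the classical 2 × 2 × 2 example (usually credited to Haberman) in which every two-way margin is positive and yet the maximum likelihood estimate for the no-three-factor-interaction model does not exist. The sketch below is illustrative only: it runs iterative proportional fitting on a table with that zero pattern, and the fitted values in the two zero cells creep toward the boundary instead of converging.

```python
# Hedged sketch: iterative proportional fitting (IPF) for the
# no-three-factor-interaction model on a 2x2x2 table with sampling
# zeros in cells (0,0,0) and (1,1,1) -- the classical pattern
# (usually credited to Haberman) for which all two-way margins are
# positive yet the MLE does not exist.  The fitted values for the
# zero cells drift toward the boundary instead of converging.
import numpy as np

n = np.array([[[0., 2.], [3., 5.]],
              [[4., 6.], [7., 0.]]])

m = np.ones_like(n)  # starting table for IPF
for it in range(1, 5001):
    # Cycle through the three two-way margins [12], [13], [23].
    m *= (n.sum(axis=2) / m.sum(axis=2))[:, :, None]
    m *= (n.sum(axis=1) / m.sum(axis=1))[:, None, :]
    m *= (n.sum(axis=0) / m.sum(axis=0))[None, :, :]
    if it in (10, 100, 1000, 5000):
        print(f"iter {it:5d}: fitted zero cells = "
              f"{m[0, 0, 0]:.5f}, {m[1, 1, 1]:.5f}")
```

In practice this slow drift, rather than any error message, is often the only symptom, which is why characterisations of existence, including the algebraic-geometry results mentioned above, matter for large sparse tables.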
135.
Quasi-life tables, in which the data arise from many concurrent, independent, discrete-time renewal processes, were defined by Baxter (1994, Biometrika 81:567–577), who outlined some methods for estimation. The processes are not observed individually; only the total numbers of renewals at each time point are observed. Crowder and Stephens (2003, Lifetime Data Anal 9:345–355) implemented a formal estimating-equation approach that invokes large-sample theory. However, these asymptotic methods fail to yield sensible estimates for smaller samples. In this paper, we implement a Bayesian analysis based on MCMC computation that works equally well for large and small sample sizes. We give three simulated examples, studying the Bayesian results, the impact of changing prior specification, and empirical properties of the Bayesian estimators of the lifetime distribution parameters. We also study the Baxter (1994, Biometrika 81:567–577) data, and uncover structure that has not been commented upon previously.
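For orientation, the following sketch (with an invented lifetime distribution and settings) simulates data of the quasi-life-table form described above: many independent discrete-time renewal processes run concurrently, and only the total number of renewals at each time point is recorded.

```python
# Sketch of how quasi-life-table data arise (assumed setup, following
# the verbal description above): many independent discrete-time
# renewal processes run concurrently, and only the total number of
# renewals at each time point is kept.
import numpy as np

rng = np.random.default_rng(1)
n_proc, horizon = 500, 12
p_life = np.array([0.2, 0.3, 0.3, 0.2])   # lifetime pmf on {1,2,3,4}

totals = np.zeros(horizon, dtype=int)     # observed data: totals only
for _ in range(n_proc):
    t = 0
    while True:
        t += rng.choice(len(p_life), p=p_life) + 1  # next renewal time
        if t > horizon:
            break
        totals[t - 1] += 1                # renewal at discrete time t

print("observed renewal totals:", totals)
```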
136.
Jump–diffusion processes, that is, diffusion processes with discontinuous movements called jumps, are widely used to model time-series data that commonly exhibit discontinuity in their sample paths. The existing jump–diffusion models have recently been extended to multivariate time-series data. The models are, however, still limited by a single parametric jump-size distribution that is common across different subjects. Such strong parametric assumptions about the shape and structure of a jump-size distribution may be too restrictive and unrealistic for multiple subjects with different characteristics. This paper thus proposes an efficient Bayesian nonparametric method to flexibly model a jump-size distribution while borrowing information across subjects in a clustering procedure using a nested Dirichlet process. For efficient posterior computation, a partially collapsed Gibbs sampler is devised to fit the proposed model. The proposed methodology is illustrated through a simulation study and an application to daily stock price data for companies in the S&P 100 index from June 2007 to June 2017.
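As background, here is a minimal simulation of the kind of process being modelled: a Merton-style jump-diffusion with a single parametric (normal) jump-size law. The paper's contribution is precisely to replace that single parametric law with a nested Dirichlet process prior across subjects, which this sketch does not attempt; all parameter values are invented.

```python
# Minimal sketch of a jump-diffusion path (Merton-style, with one
# parametric jump-size law), simulated by an Euler scheme on the
# log-price.  Illustration only; all parameter values are invented.
import numpy as np

rng = np.random.default_rng(2)
T, steps = 1.0, 250
dt = T / steps
mu, sigma = 0.05, 0.2          # drift and diffusion volatility
lam = 10.0                     # jump intensity (jumps per unit time)
jump_mu, jump_sd = 0.0, 0.03   # parametric (normal) jump-size law

x = np.empty(steps + 1)
x[0] = np.log(100.0)           # log-price
for i in range(steps):
    diffusion = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.normal()
    n_jumps = rng.poisson(lam * dt)                 # jumps this step
    jumps = rng.normal(jump_mu, jump_sd, n_jumps).sum()
    x[i + 1] = x[i] + diffusion + jumps

print("final price:", round(np.exp(x[-1]), 2))
```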
138.
This paper considers a connected Markov chain for sampling 3 × 3 × K contingency tables having fixed two-dimensional marginal totals. Such sampling arises in performing various tests of the hypothesis of no three-factor interactions. A Markov chain algorithm is a valuable tool for evaluating P-values, especially for sparse datasets where large-sample theory does not work well. To construct a connected Markov chain over high-dimensional contingency tables with fixed marginals, algebraic algorithms have been proposed. These algorithms involve computations in polynomial rings using Gröbner bases. However, algorithms based on Gröbner bases do not incorporate symmetry among variables and are very time-consuming when the contingency tables are large. We construct a minimal basis for a connected Markov chain over 3 × 3 × K contingency tables. The minimal basis is unique. Some numerical examples illustrate the practicality of our algorithms.
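The flavour of such chains is easiest to see in the simplest setting, sketched below: for a two-way table with fixed row and column sums, the basic moves (the classical Diaconis–Sturmfels moves) add +1/−1 on the corners of a randomly chosen 2 × 2 rectangle. The paper's minimal basis for 3 × 3 × K tables with all two-way margins fixed requires a richer set of moves; this sketch only illustrates the margin-preserving idea.

```python
# Sketch of a connected Markov chain on two-way tables with fixed
# row and column sums: each proposal adds +1/-1 on the corners of a
# random 2x2 rectangle and is rejected if any cell would go negative.
import numpy as np

rng = np.random.default_rng(3)
table = np.array([[4, 2, 3],
                  [1, 5, 2],
                  [3, 1, 4]])

for _ in range(1000):
    (i, j), (k, l) = rng.choice(3, 2, replace=False), rng.choice(3, 2, replace=False)
    sign = rng.choice([-1, 1])
    move = np.zeros_like(table)
    move[i, k] = move[j, l] = sign      # +1 on one diagonal of the rectangle
    move[i, l] = move[j, k] = -sign     # -1 on the other diagonal
    if (table + move).min() >= 0:       # stay inside the fibre (no negatives)
        table += move

print("row sums:", table.sum(axis=1))   # unchanged: [9, 8, 8]
print("col sums:", table.sum(axis=0))   # unchanged: [8, 8, 9]
```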
139.
The most common asymptotic procedure for analyzing a 2 × 2 table (under the conditioning principle) is the chi-squared test with correction for continuity (c.f.c.). Depending on how the correction is applied, four methods have been obtained to date: one for one-tailed tests (Yates') and three for two-tailed tests (those of Mantel, Conover and Haber). In this paper two further methods are defined (one for each case), the six resulting methods are grouped in families, their individual behaviour is studied, and the optimal one is selected. The conclusions are established on the assumption that the method studied is applied indiscriminately (without being subjected to validity conditions), using a base of 400,000 tables (with sample sizes n between 20 and 300 and exact P-values between 1% and 10%) and an evaluation criterion based on the percentage of times in which the approximate P-value differs from the exact one (Fisher's exact test) by an excessive amount. The optimal c.f.c. depends on n, on E (the minimum expected frequency) and on the error α to be used, but the rule of selection is not complicated, and the new methods proposed are frequently selected. In the paper we also study what occurs when E ≥ 5, as well as the behaviour of the chi-squared statistic multiplied by the factor (n-1).
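For a single table the comparison underlying the study can be reproduced with standard SciPy calls; the sketch below uses an invented 2 × 2 table and contrasts the P-value of the chi-squared test with Yates' continuity correction against Fisher's exact P-value.

```python
# Quick sketch of the comparison the paper makes at scale: an
# approximate P-value from the chi-squared test with Yates'
# continuity correction versus the exact (Fisher) P-value, here for
# one illustrative 2x2 table rather than 400,000 of them.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

table = np.array([[12, 5],
                  [6, 14]])

chi2, p_yates, dof, expected = chi2_contingency(table, correction=True)
odds, p_exact = fisher_exact(table)   # two-sided exact P-value

print(f"Yates-corrected chi-squared P: {p_yates:.4f}")
print(f"Fisher exact P               : {p_exact:.4f}")
print(f"minimum expected count E     : {expected.min():.2f}")
```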
140.
A brief review of the minimum discrimination information (MDI) approach to analyzing categorical data is presented in a question-and-answer format. An example is given to bring out situations in which the MDI approach is more useful. No new results are proved.
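As a reminder of the basic quantity involved, the sketch below (an illustration, not from the review) computes Kullback's discrimination information for a two-way table against its independence fit; in count form it is the familiar G-squared statistic 2 Σ obs·log(obs/fit).

```python
# Sketch of the basic MDI quantity (Kullback's discrimination
# information) for a two-way table against the independence fit;
# in count form it equals the familiar G-squared statistic.  This is
# only the entry point of the MDI machinery the review covers.
import numpy as np

obs = np.array([[20, 15], [10, 30]], dtype=float)
n = obs.sum()
fit = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / n  # independence fit

mask = obs > 0                 # zero cells contribute nothing
g2 = 2 * np.sum(obs[mask] * np.log(obs[mask] / fit[mask]))
print("MDI (G-squared) statistic:", round(g2, 3))
```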