Sort order: 369 results found (search time: 15 ms)
81.
The ordinary least squares (OLS) estimator of regression coefficients is implicitly based on the i.i.d. assumption, which is rarely satisfied by survey data. The many approaches proposed in the literature can be classified into two broad categories: model-based and design-consistent. DuMouchel and Duncan (1983) proposed a test statistic λ that helps in testing the ignorability of the sampling weights. In this article a preliminary test estimator based on λ is proposed. The model-based properties of this estimator are investigated theoretically, whereas a simulation approach is adopted to study its design-based properties. The proposed estimator is observed to be a better compromise between the model-based and randomization-based inferential frameworks.
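The pretest idea can be sketched as follows: fit both the unweighted and the survey-weighted regression, then let a test of weight ignorability decide which fit to report. This is a minimal sketch only; the Wald-type statistic and the chi-square threshold below stand in for the DuMouchel–Duncan λ test, whose exact form is not given in the abstract.

```python
import numpy as np

def pretest_estimator(X, y, w, threshold=3.84):
    """Preliminary-test choice between unweighted OLS and
    survey-weighted least squares.  Illustrative sketch: the
    Wald-type statistic and threshold are stand-ins for the
    DuMouchel-Duncan lambda test."""
    # Unweighted OLS: solve (X'X) beta = X'y
    beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
    # Weighted LS with sampling weights w: solve (X'WX) beta = X'Wy
    W = np.diag(w)
    beta_w = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    # Wald-type statistic on the difference between the two fits
    resid = y - X @ beta_ols
    s2 = resid @ resid / (len(y) - X.shape[1])
    d = beta_w - beta_ols
    V = s2 * np.linalg.inv(X.T @ X)  # crude variance proxy
    stat = d @ np.linalg.solve(V, d)
    # Weights ignorable -> report OLS; otherwise keep the weighted fit
    return (beta_ols, "ols") if stat < threshold else (beta_w, "weighted")
```

With uniform weights the two fits coincide, the statistic is zero, and the unweighted OLS estimate is returned, which is the behaviour the pretest is meant to deliver when the weights are ignorable.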
82.
This paper is concerned with an empirical investigation of a common and important type of computer system and usage in business applications, viz., querying, updating and modifying a large database. Specifically, it describes the analysis of data collected from such an application and addresses two issues. Firstly, it assesses the applicability of statistical assumptions that underlie certain widely-used queueing theory models for computer systems usage. Secondly, it investigates measures of usage, as well as relationships among them, that may serve as appropriate bases for a pricing scheme for usage of computer systems of the type considered here.
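One standard check of the assumptions behind Markovian queueing models is whether inter-arrival times are exponential (i.e., whether arrivals are Poisson). The helper below is a hypothetical illustration, not the paper's actual procedure; note also that estimating the rate from the same data makes the Kolmogorov–Smirnov test conservative.

```python
import numpy as np
from scipy import stats

def check_poisson_arrivals(timestamps, alpha=0.05):
    """Test the M/M/1-style assumption that inter-arrival times are
    exponential (hypothetical helper; the abstract does not specify
    the paper's tests).  Estimating the scale from the data makes
    the KS test conservative, so this is a rough screen only."""
    gaps = np.diff(np.sort(timestamps))
    # Kolmogorov-Smirnov test against the fitted exponential
    stat, pval = stats.kstest(gaps, "expon", args=(0, gaps.mean()))
    return pval > alpha, pval
```

Request logs whose gaps fail this screen would call the Markovian arrival assumption, and hence the queueing model built on it, into question.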
83.
Humans are continuously exposed to suspected or proven endocrine-disrupting chemicals (EDCs). Risk management of EDCs presents a major unmet challenge because the available data for adverse health effects are generated by examining one compound at a time, whereas real‐life exposures are to mixtures of chemicals. In this work, we integrate epidemiological and experimental evidence toward a whole-mixture strategy for risk assessment. To illustrate, we conduct the following four steps in a case study: (1) identification of single EDCs (“bad actors”)—measured in prenatal blood/urine in the SELMA study—that are associated with a shorter anogenital distance (AGD) in baby boys; (2) definition and construction of a “typical” mixture consisting of the “bad actors” identified in Step 1; (3) experimentally testing this mixture in an in vivo animal model to estimate a dose–response relationship and determine a point of departure (i.e., reference dose [RfD]) associated with an adverse health outcome; and (4) use of a statistical measure of “sufficient similarity” to compare the experimental RfD (from Step 3) to the exposure measured in the human population and generate a “similar mixture risk indicator” (SMRI). The objective of this exercise is to generate a proof of concept for the systematic integration of epidemiological and experimental evidence with mixture risk assessment strategies. Using a whole-mixture approach, we found a higher rate of pregnant women at risk (13%) compared with data from more traditional additivity models (3%) or a compound‐by‐compound strategy (1.6%).
84.
This paper introduces a sampling plan for finite populations herein called “variable size simple random sampling” and compares properties of estimators based on it with results from the usual fixed size simple random sampling without replacement. Necessary and sufficient conditions (in the spirit of Hajek (1960)) for the limiting distribution of the sample total (or sample mean) to be normal are given.
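A concrete instance of a variable-size design is Bernoulli sampling: each unit enters the sample independently with probability p, so the realized sample size is Binomial(N, p) rather than fixed. This is a sketch under that assumption (the abstract does not pin down the paper's exact plan); the Horvitz–Thompson weighting y_i / p keeps the total estimator design-unbiased.

```python
import numpy as np

def bernoulli_sample_total(y, p, rng):
    """'Variable size' sampling sketch: every unit is included
    independently with probability p, so the sample size is random.
    Returns the realized size and the Horvitz-Thompson estimate of
    the population total (each sampled y_i is weighted by 1/p)."""
    take = rng.random(len(y)) < p
    return take.sum(), y[take].sum() / p
```

Averaging the estimate over many replicate draws recovers the true total, while the realized sample size varies from draw to draw, which is exactly what distinguishes this design from fixed-size simple random sampling.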
85.
In this paper, we examine a nonlinear regression (NLR) model with homoscedastic errors which follow a flexible class of two-piece distributions based on the scale mixtures of normal (TP-SMN) family. The objective of using this family is to develop a robust NLR model. The TP-SMN is a rich class of distributions that covers symmetric/asymmetric and light/heavy-tailed distributions and is an alternative to the well-known scale mixtures of skew-normal (SMSN) family studied by Branco and Dey [35]. A key feature of this study is the use of a new hierarchical representation of the family to obtain maximum-likelihood estimates of the model parameters via an EM-type algorithm. The performance of the proposed robust model is demonstrated using simulated and real datasets and compared with other well-known NLR models.
86.
The property of identifiability is an important consideration in estimating the parameters of a mixture of distributions. Moreover, classification of a random variable based on a mixture can be meaningfully discussed only if the class of all finite mixtures is identifiable. The problem of identifiability of finite mixtures of Gompertz distributions is studied. A procedure is presented for finding maximum likelihood estimates of the parameters of a mixture of two Gompertz distributions, using classified and unclassified observations. Based on small sample sizes, estimation of a nonlinear discriminant function is considered. Through simulation experiments, the performance of the corresponding estimated nonlinear discriminant function is investigated.
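Given fitted parameters, the nonlinear discriminant for a two-component mixture reduces to the posterior log-odds of component membership. The sketch below assumes one common Gompertz parameterization (hazard a·exp(b·x)); the abstract does not fix one, and the estimation step is omitted.

```python
import numpy as np

def gompertz_logpdf(x, a, b):
    """log f(x) for a Gompertz law with hazard a*exp(b*x), so
    f(x) = a*exp(b*x) * exp(-(a/b)*(exp(b*x)-1)) for x >= 0.
    (One common parameterization; assumed, not taken from the paper.)"""
    return np.log(a) + b * x - (a / b) * np.expm1(b * x)

def discriminant(x, pi1, theta1, theta2):
    """Nonlinear discriminant for a two-component Gompertz mixture:
    assign x to component 1 when the posterior log-odds are positive."""
    log_odds = (np.log(pi1) - np.log1p(-pi1)
                + gompertz_logpdf(x, *theta1) - gompertz_logpdf(x, *theta2))
    return (log_odds > 0).astype(int)  # 1 -> component 1, 0 -> component 2
```

The discriminant is nonlinear in x because the Gompertz log-density contains the exp(b·x) term, which is what makes its small-sample estimation (the focus of the paper) nontrivial.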
87.
Calibration techniques in survey sampling, such as generalized regression estimation (GREG), were formalized in the 1990s to produce efficient estimators of linear combinations of study variables, such as totals or means. They implicitly rely on the assumption of a linear regression model between the variable of interest and some auxiliary variables in order to yield estimates with lower variance if the model is true, while remaining approximately design-unbiased even if the model does not hold. We propose a new class of model-assisted estimators obtained by releasing a few calibration constraints and replacing them with a penalty term added to the distance criterion to be minimized. By introducing the concept of penalized calibration, combining usual calibration and this ‘relaxed’ calibration, we are able to adjust the weight given to the available auxiliary information. We obtain a more flexible estimation procedure giving better estimates, particularly when the auxiliary information is overly abundant or not fully appropriate to be used completely. Such an approach can also be seen as a design-based alternative to estimation procedures based on the more general class of mixed models, presenting new prospects in some scopes of application such as inference on small domains.
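When every constraint is relaxed, the penalized-calibration problem has a closed form: minimizing the chi-square distance to the design weights d plus λ‖Aᵀw − t‖² leads to a single linear system. This all-soft version is a simplification for illustration; the paper mixes hard and penalized constraints.

```python
import numpy as np

def penalized_calibration(d, A, t, lam):
    """Fully 'relaxed' calibration sketch: minimize
        (w - d)' diag(1/d) (w - d) + lam * ||A'w - t||^2.
    Setting the gradient to zero gives
        (D^{-1} + lam * A A') w = D^{-1} d + lam * A t.
    lam = 0 returns the design weights d unchanged; lam -> infinity
    recovers classical (exact) calibration on the targets t.
    (Illustrative; the paper keeps some constraints hard.)"""
    Dinv = np.diag(1.0 / d)
    M = Dinv + lam * (A @ A.T)
    rhs = Dinv @ d + lam * (A @ t)
    return np.linalg.solve(M, rhs)
```

The penalty weight λ is exactly the dial the abstract describes: it trades off fidelity to the design weights against fidelity to the auxiliary totals, instead of forcing the latter to hold exactly.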
88.
Free-living individuals have multifaceted diets and consume foods in numerous combinations. In epidemiological studies it is desirable to characterize individual diets not only in terms of the quantity of individual dietary components but also in terms of dietary patterns. We describe the conditional Gaussian mixture model for dietary pattern analysis and show how it can be adapted to take account of important characteristics of self-reported dietary data. We illustrate this approach with an analysis of the 2000–2001 National Diet and Nutrition Survey of adults. The results strongly favoured a mixture model solution allowing clusters to vary in shape and size, over the standard approach that has been used previously to find dietary patterns.
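The model-comparison step can be illustrated with scikit-learn: fit mixtures whose clusters may vary in shape and size (full covariances) and mixtures with the rigid spherical shape implicit in k-means-style clustering, then compare by BIC. Synthetic stand-in data are used here, not the NDNS survey data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic data: one elongated, correlated cluster plus one small
# round cluster -- the situation where flexible covariances pay off.
rng = np.random.default_rng(1)
X = np.vstack([
    rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], 300),
    rng.multivariate_normal([4, 0], [[0.2, 0.0], [0.0, 0.2]], 100),
])
# Lower BIC is better; "full" lets clusters vary in shape and size,
# "spherical" mimics the rigid standard approach.
bics = {ct: GaussianMixture(n_components=2, covariance_type=ct,
                            random_state=0).fit(X).bic(X)
        for ct in ("full", "spherical")}
```

On data with correlated, unequally sized clusters, BIC favours the flexible model despite its extra parameters, mirroring the survey finding reported above.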
89.
Over the last decade the use of trans-dimensional sampling algorithms has become commonplace in the statistical literature. Despite their popularity, however, there are few reliable methods to assess whether the underlying Markov chains have reached their stationary distribution. In this article we present a distance-based method for the comparison of trans-dimensional Markov chain sample output for a broad class of models. This diagnostic simultaneously assesses deviations between and within chains. Analysis of Markov chain sample paths is illustrated in simulated examples and in two common modelling situations: a finite mixture analysis and a change-point problem.
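A crude stand-in for such a distance-based diagnostic: compare the empirical distributions of the model indicator across two chains (between-chain), and compare the two halves of a single chain in exactly the same way (within-chain). The total-variation distance used here is an assumption for illustration, not the paper's actual metric.

```python
import numpy as np

def tv_distance(ks_a, ks_b, n_models):
    """Total-variation distance between the empirical model-index
    distributions of two trans-dimensional chains (a crude stand-in
    for the paper's distance-based diagnostic)."""
    pa = np.bincount(ks_a, minlength=n_models) / len(ks_a)
    pb = np.bincount(ks_b, minlength=n_models) / len(ks_b)
    return 0.5 * np.abs(pa - pb).sum()

def within_chain(chain, n_models):
    """Within-chain check: compare the first and second halves of one
    chain exactly as two separate chains would be compared."""
    h = len(chain) // 2
    return tv_distance(chain[:h], chain[h:], n_models)
```

Distances near 0 are consistent with stationarity; distances near 1 indicate the chains (or chain halves) are visiting different model subspaces.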
90.
A finite mixture model using the Student's t distribution has been recognized as a robust extension of normal mixtures. Recently, a mixture of skew normal distributions has been found to be effective in the treatment of heterogeneous data involving asymmetric behaviors across subclasses. In this article, we propose a robust mixture framework based on the skew t distribution to efficiently deal with heavy-tailedness, extra skewness and multimodality in a wide range of settings. Statistical mixture modeling based on normal, Student's t and skew normal distributions can be viewed as special cases of the skew t mixture model. We present analytically simple EM-type algorithms for iteratively computing maximum likelihood estimates. The proposed methodology is illustrated by analyzing a real data example.