81.
Cross-classified data are often obtained in controlled experimental situations and in epidemiologic studies. As an example of the latter, occupational health studies sometimes require personal exposure measurements on a random sample of workers from one or more job groups, in one or more plant locations, on several different sampling dates. Because the marginal distributions of exposure data from such studies are generally right-skewed and well-approximated as lognormal, researchers in this area often consider the use of ANOVA models after a logarithmic transformation. While it is then of interest to estimate original-scale population parameters (e.g., the overall mean and variance), standard candidates such as maximum likelihood estimators (MLEs) can be unstable and highly biased. Uniformly minimum variance unbiased (UMVU) estimators offer a viable alternative, and are adaptable to sampling schemes that are typical of experimental or epidemiologic studies. In this paper, we provide UMVU estimators for the mean and variance under two random effects ANOVA models for log-transformed data. We illustrate substantial mean squared error gains relative to the MLE when estimating the mean under a one-way classification. We then show that the results can readily be extended to encompass a useful class of purely random effects models, provided that the study data are balanced.
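The MLE-versus-UMVU contrast is easiest to see in the plain one-sample lognormal setting, without the ANOVA structure the paper treats. The sketch below (illustrative only; the function names and simulation are not from the paper) implements Finney's classical series-based UMVU estimator of the lognormal mean alongside the plug-in MLE.

```python
import numpy as np

def finney_psi(n, t, terms=80):
    """Finney's series psi_n(t); successive terms satisfy
    term_j = term_{j-1} * (n-1)^2 * t / (n * (n + 2j - 3) * j)."""
    total = term = 1.0
    for j in range(1, terms):
        term *= (n - 1) ** 2 * t / (n * (n + 2 * j - 3) * j)
        total += term
    return total

def lognormal_mean_estimates(x):
    """MLE and Finney's UMVU estimate of E[X] for a lognormal sample x."""
    y = np.log(x)
    n = len(y)
    mle = np.exp(y.mean() + y.var(ddof=0) / 2)                  # plug-in MLE
    umvu = np.exp(y.mean()) * finney_psi(n, y.var(ddof=1) / 2)  # exactly unbiased
    return mle, umvu

rng = np.random.default_rng(0)
x = rng.lognormal(mean=1.0, sigma=1.2, size=15)  # true mean exp(1 + 1.2**2/2) ~ 5.58
print(lognormal_mean_estimates(x))
```

At small sample sizes like this, the UMVU estimate is exactly unbiased while the MLE is not, which is the kind of gap the abstract's mean squared error comparison quantifies in the one-way classification.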
82.
Book Reviews     
The diagnostic tools examined in this article are applicable to regressions estimated with panel data or cross-sectional data drawn from a population with grouped structure. The diagnostic tools considered include (a) tests for the existence of group effects under both fixed and random effects models, (b) checks for outlying groups, and (c) specification tests for comparing the fixed and random effects models. A group-specific counterpart to the studentized residual is introduced. The methods are illustrated using a hedonic housing price regression.
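As a concrete instance of item (a), here is a minimal sketch of the classical F test for fixed group effects, comparing pooled OLS with the within (group-demeaned, equivalently LSDV) fit. The data-generating code and names are invented for illustration and assume numpy and scipy are available.

```python
import numpy as np
from scipy.stats import f

def group_effects_F(y, X, groups):
    """F test for fixed group effects: pooled OLS (restricted) vs. the
    within/LSDV estimator (unrestricted). X excludes the intercept."""
    n, k = X.shape
    labels, idx = np.unique(groups, return_inverse=True)
    G = len(labels)
    counts = np.bincount(idx)

    # restricted model: pooled OLS with a common intercept
    Xr = np.column_stack([np.ones(n), X])
    br, *_ = np.linalg.lstsq(Xr, y, rcond=None)
    ssr_r = np.sum((y - Xr @ br) ** 2)

    # unrestricted model: demean y and X within each group
    y_w = y - np.bincount(idx, weights=y)[idx] / counts[idx]
    X_w = X - np.stack([np.bincount(idx, weights=X[:, j])[idx] / counts[idx]
                        for j in range(k)], axis=1)
    bw, *_ = np.linalg.lstsq(X_w, y_w, rcond=None)
    ssr_u = np.sum((y_w - X_w @ bw) ** 2)

    F = ((ssr_r - ssr_u) / (G - 1)) / (ssr_u / (n - G - k))
    return F, (G - 1, n - G - k)

rng = np.random.default_rng(5)
groups = np.repeat(np.arange(8), 25)
X = rng.standard_normal((200, 2))
y = X @ np.array([1.0, -0.5]) + 0.8 * rng.standard_normal(8)[groups] \
    + rng.standard_normal(200)
F_stat, (df1, df2) = group_effects_F(y, X, groups)
print(F_stat, f.sf(F_stat, df1, df2))  # large F / small p => group effects present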
83.
The principal components analysis (PCA) in the frequency domain of a stationary p-dimensional time series $(X_n)_{n \in \mathbb{Z}}$ leads to a summarizing time series written as a linear combination series $X_n = \sum_{m} C_m \circ X_{n-m}$. We observe that, when the coefficients $C_m$, $m \neq 0$, are close to 0, this PCA is close to the usual PCA, that is, the PCA in the temporal domain. When the coefficients tend to 0, the corresponding limit is said to satisfy a property denoted $\mathcal{P}$, whose consequences we study. Finally, we examine, for any series, the proximity between the two PCAs.
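A minimal numerical sketch of the frequency-domain construction, assuming a smoothed-periodogram estimate of the spectral density matrix (all names and the toy series are illustrative): frequency-domain PCA diagonalizes the estimated spectral matrix frequency by frequency, while the usual temporal PCA diagonalizes only the lag-0 covariance.

```python
import numpy as np

def spectral_pca(X, h=5):
    """Frequency-domain PCA: eigendecompose a smoothed-periodogram
    estimate of the spectral density matrix at each Fourier frequency.
    X: (T, p) array holding a stationary series."""
    T, p = X.shape
    Z = np.fft.fft(X - X.mean(0), axis=0)
    I = Z[:, :, None] * Z[:, None, :].conj() / (2 * np.pi * T)  # periodogram, (T, p, p)
    eigvals = np.empty((T, p))
    eigvecs = np.empty((T, p, p), dtype=complex)
    for t in range(T):
        idx = np.arange(t - h, t + h + 1) % T   # circular smoothing window
        S = I[idx].mean(axis=0)                 # smoothed spectral matrix at 2*pi*t/T
        w, V = np.linalg.eigh(S)                # Hermitian eigendecomposition
        eigvals[t], eigvecs[t] = w[::-1], V[:, ::-1]
    return eigvals, eigvecs

# toy series whose second coordinate partly lags the first
rng = np.random.default_rng(1)
e = rng.standard_normal((500, 2))
X = np.column_stack([e[:, 0], 0.8 * np.roll(e[:, 0], 1) + 0.6 * e[:, 1]])

lam, vec = spectral_pca(X)
w0, V0 = np.linalg.eigh(np.cov(X.T))
# leading direction at frequency 0 vs. the ordinary (temporal) leading PC
print(np.abs(vec[0][:, 0]).round(3), np.abs(V0[:, -1]).round(3))
```

When the lagged coefficients are negligible, the per-frequency eigenvectors barely vary with frequency and line up with the temporal PCs, which is the regime the property $\mathcal{P}$ formalizes.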
84.
This paper considers optimization problems for a consecutive-2-out-of-n:G system where n is either fixed or random. When the number of components is constant, the optimal number of components and the optimal replacement time are derived by minimizing the expected cost rates. We then revisit these questions when n is a random variable, giving an approximate value of the MTTF and proposing a corresponding preventive replacement policy.
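For the fixed-n case, the reliability entering such expected cost rates has a simple recursion when components fail independently with common reliability p. The sketch below is a generic illustration under that i.i.d. assumption, not the paper's cost model; the target-reliability search stands in for the cost minimization.

```python
def consec2_G_reliability(n, p):
    """Reliability of a consecutive-2-out-of-n:G system with i.i.d.
    components of reliability p: the system works iff at least two
    adjacent components work. With Q(n) = P(no two adjacent working),
    Q(n) = q*Q(n-1) + p*q*Q(n-2), Q(0) = Q(1) = 1."""
    q = 1 - p
    Q = [1.0, 1.0]
    for _ in range(2, n + 1):
        Q.append(q * Q[-1] + p * q * Q[-2])
    return 1 - Q[n]

# sanity check: n = 2 requires both components, so R = p**2
assert abs(consec2_G_reliability(2, 0.7) - 0.49) < 1e-12

# e.g. smallest n reaching a target reliability (cost terms omitted)
target = 0.99
n = next(k for k in range(2, 100) if consec2_G_reliability(k, 0.7) >= target)
print(n, consec2_G_reliability(n, 0.7))
```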
85.
Evaluators are challenged to keep pace with the vast array of Veteran support programs operating in the United States, resulting in a situation in which many programs lack any evidence of impact. Due to this lack of evidence, there is no efficient way to suggest which programs are most effective in helping Veterans in need of support. One potential solution to this dilemma is to reconceptualize program evaluation, by moving away from evaluating programs individually to evaluating what is common across programs. The Common Components Analysis (CCA) is one such technique that aggregates findings from programs that have undergone rigorous evaluation at the level of program components (e.g., content, process, barrier reduction). Given that many Veteran programs lack outcome evidence from rigorous studies, an adaptation to CCA is needed. This report examines cross-sectional data from a pilot study using an adapted CCA across five domains of well-being (i.e., employment, education, legal/financial/housing, mental/physical health, and social/personal relationships). The purpose of this preliminary study is to determine the feasibility of eliciting program nominations and program components from Veterans via an online survey. When coupled with a longitudinal research design, this adaptation to CCA will allow for stronger causal claims about the expected impact of different program components within and across a variety of domains.
86.
87.
This paper develops a new methodology that makes use of the factor structure of large dimensional panels to understand the nature of nonstationarity in the data. We refer to it as PANIC—Panel Analysis of Nonstationarity in Idiosyncratic and Common components. PANIC can detect whether the nonstationarity in a series is pervasive, or variable‐specific, or both. It can determine the number of independent stochastic trends driving the common factors. PANIC also permits valid pooling of individual statistics and thus panel tests can be constructed. A distinctive feature of PANIC is that it tests the unobserved components of the data instead of the observed series. The key to PANIC is consistent estimation of the space spanned by the unobserved common factors and the idiosyncratic errors without knowing a priori whether these are stationary or integrated processes. We provide a rigorous theory for estimation and inference and show that the tests have good finite sample properties.
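A stripped-down sketch of the PANIC idea (assuming numpy and statsmodels are available; one factor, no deterministic terms): estimate the factor space by PCA on the differenced panel, cumulate back to levels, then test the common and idiosyncratic pieces separately. The simulated panel and names are illustrative.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def panic_decompose(X, r=1):
    """PANIC-style decomposition: PCA on first differences, then
    cumulate estimated factors and idiosyncratic parts back to levels."""
    dX = np.diff(X, axis=0)                 # (T-1, N) differenced panel
    dX = dX - dX.mean(0)
    U, s, Vt = np.linalg.svd(dX, full_matrices=False)
    f = U[:, :r] * s[:r]                    # estimated differenced factors
    L = Vt[:r].T                            # loadings
    F = np.cumsum(f, axis=0)                # common factors in levels
    E = np.cumsum(dX - f @ L.T, axis=0)     # idiosyncratic parts in levels
    return F, E

rng = np.random.default_rng(2)
T, N = 200, 30
common = np.cumsum(rng.standard_normal(T))     # one I(1) common factor
lam = rng.standard_normal(N)
idio = rng.standard_normal((T, N))             # stationary idiosyncratic errors
X = np.outer(common, lam) + idio

F, E = panic_decompose(X, r=1)
print("ADF p-value, common factor:", round(adfuller(F[:, 0])[1], 3))
print("median ADF p-value, idiosyncratic:",
      round(float(np.median([adfuller(E[:, i])[1] for i in range(N)])), 3))
```

In this setup the pervasive nonstationarity sits in the common factor (ADF fails to reject) while the idiosyncratic components test as stationary, which is exactly the distinction PANIC is built to draw.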
88.
In this paper we develop some econometric theory for factor models of large dimensions. The focus is the determination of the number of factors (r), which is an unresolved issue in the rapidly growing literature on multifactor models. We first establish the convergence rate for the factor estimates that will allow for consistent estimation of r. We then propose some panel criteria and show that the number of factors can be consistently estimated using the criteria. The theory is developed under the framework of large cross‐sections (N) and large time dimensions (T). No restriction is imposed on the relation between N and T. Simulations show that the proposed criteria have good finite sample properties in many configurations of the panel data encountered in practice.
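A compact sketch of such panel criteria, assuming numpy; the two penalty terms below are the commonly cited IC_p1 and IC_p2 forms, and the simulated panel is illustrative.

```python
import numpy as np

def bai_ng_ic(X, kmax=8):
    """Choose the number of factors by panel information criteria:
    IC(k) = log V(k) + k * penalty, where V(k) is the average squared
    residual after fitting k principal-component factors."""
    T, N = X.shape
    X = X - X.mean(0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    ic1, ic2 = [], []
    for k in range(1, kmax + 1):
        Xhat = (U[:, :k] * s[:k]) @ Vt[:k]
        V = np.sum((X - Xhat) ** 2) / (N * T)
        pen = k * (N + T) / (N * T)
        ic1.append(np.log(V) + pen * np.log(N * T / (N + T)))   # IC_p1
        ic2.append(np.log(V) + pen * np.log(min(N, T)))         # IC_p2
    return 1 + int(np.argmin(ic1)), 1 + int(np.argmin(ic2))

rng = np.random.default_rng(3)
T, N, r = 100, 60, 3
F = rng.standard_normal((T, r))
L = rng.standard_normal((N, r))
X = F @ L.T + rng.standard_normal((T, N))
print(bai_ng_ic(X))  # both criteria should typically select r = 3 here
```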
89.
The introduction of software to calculate maximum likelihood estimates for mixed linear models has made likelihood estimation a practical alternative to methods based on sums of squares. Likelihood-based tests and confidence intervals, however, may be misleading in problems with small sample sizes. This paper discusses an adjusted version of the directed log-likelihood statistic for mixed models that is highly accurate for testing one-parameter hypotheses. Introduced by Skovgaard (1996, Journal of the Bernoulli Society, 2, 145-165), the statistic is shown to have a simple compact form in mixed models that may be obtained from standard software. Simulation studies indicate that this statistic is more accurate than many of the specialized procedures that have been advocated.
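The unadjusted directed log-likelihood statistic is straightforward to compute in any one-parameter problem. The sketch below shows it for an exponential rate, a deliberately simple stand-in for the mixed-model setting; Skovgaard's adjustment itself requires extra model-specific quantities and is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def directed_loglik_exponential(x, lam0):
    """Directed (signed root) log-likelihood statistic r for H0: rate = lam0
    in an exponential sample; r is compared with a standard normal."""
    n, sx = len(x), np.sum(x)
    lam_hat = n / sx                                  # MLE of the rate
    loglik = lambda lam: n * np.log(lam) - lam * sx
    r = np.sign(lam_hat - lam0) * np.sqrt(2 * (loglik(lam_hat) - loglik(lam0)))
    return r, 2 * norm.sf(abs(r))                     # two-sided p-value

rng = np.random.default_rng(4)
x = rng.exponential(scale=1 / 1.5, size=12)           # true rate 1.5, small n
print(directed_loglik_exponential(x, lam0=1.0))
```

The adjustment the paper studies modifies r so that its standard normal approximation stays accurate at sample sizes like the n = 12 used here, where the raw statistic can already be noticeably off.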
90.
This paper describes two new, mathematical programming-based approaches for evaluating general, one- and two-sided p-variate normal probabilities where the variance-covariance matrix (of arbitrary structure) is singular with rank r (r < p; r and p can be of unlimited dimensions). In both cases, principal components are used to transform the original, ill-defined p-dimensional integral into a well-defined r-dimensional integral over a convex polyhedron. The first algorithm that is presented uses linear programming coupled with a Gauss-Legendre quadrature scheme to compute this integral, while the second algorithm uses multi-parametric programming techniques in order to significantly reduce the number of optimization problems that need to be solved. The application of the algorithms is demonstrated and aspects of computational performance are discussed through a number of examples, ranging from a practical problem that arises in chemical engineering to larger, numerical examples.
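A minimal numerical cross-check of the reduction step, assuming numpy: project onto the r principal components of the singular covariance and evaluate the resulting region probability, here by plain Monte Carlo rather than the paper's LP/quadrature or multi-parametric programming machinery.

```python
import numpy as np

def singular_mvn_prob(a, b, Sigma, n=200_000, tol=1e-10, seed=0):
    """P(a <= X <= b) for X ~ N(0, Sigma) with singular Sigma of rank r.
    Reduce to the r principal components, then estimate by Monte Carlo."""
    w, V = np.linalg.eigh(Sigma)
    keep = w > tol * w.max()                  # the r retained components
    A = V[:, keep] * np.sqrt(w[keep])         # X = A @ Z with Z ~ N(0, I_r)
    Z = np.random.default_rng(seed).standard_normal((n, int(keep.sum())))
    X = Z @ A.T
    return np.mean(np.all((X >= a) & (X <= b), axis=1))

# rank-1 example in p = 3: X = t * u with t ~ N(0, 1)
u = np.array([1.0, 2.0, -1.0])
Sigma = np.outer(u, u)
print(singular_mvn_prob(a=-np.ones(3), b=np.ones(3), Sigma=Sigma))
# all three box constraints collapse to |t| <= 0.5, so the answer
# can be checked against 2*Phi(0.5) - 1 ~ 0.383
```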