Search results: 1,779 records found (search time: 0 ms)
31.
This paper considers the analysis of time-to-event data in the presence of collinearity between covariates. In linear and logistic regression models, the ridge regression estimator has been applied as an alternative to the maximum likelihood estimator when covariates are collinear. The advantage of the ridge regression estimator over the usual maximum likelihood estimator is that the former often has a smaller total mean square error and is thus more precise. In this paper, we generalize this approach for addressing collinearity to the Cox proportional hazards model. Simulation studies were conducted to evaluate the performance of the ridge regression estimator. Our approach was motivated by an occupational radiation study conducted at Oak Ridge National Laboratory to evaluate health risks associated with occupational radiation exposure, in which the exposure tends to be correlated with possible confounders such as years of exposure and attained age. We applied the proposed methods to this study to evaluate the association of radiation exposure with all-cause mortality.
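The shrinkage idea carried over to the Cox model above can be sketched in miniature with ordinary ridge regression on two nearly collinear covariates. The data and penalty value below are invented purely for illustration; this is not the paper's Cox-model estimator, only the linear-model analogue it starts from.

```python
# Illustrative sketch: ridge vs. ordinary least squares with two
# nearly collinear covariates. Everything here is made-up toy data.

def solve2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - c * e) / det)

def fit(X, y, lam=0.0):
    # Normal equations (X'X + lam*I) beta = X'y for p = 2 covariates.
    s11 = sum(x[0] * x[0] for x in X) + lam
    s12 = sum(x[0] * x[1] for x in X)
    s22 = sum(x[1] * x[1] for x in X) + lam
    t1 = sum(x[0] * yi for x, yi in zip(X, y))
    t2 = sum(x[1] * yi for x, yi in zip(X, y))
    return solve2(s11, s12, s12, s22, t1, t2)

# Nearly collinear covariates (the second is almost a copy of the first).
X = [(1.0, 1.1), (2.0, 1.9), (3.0, 3.2), (4.0, 3.9)]
y = [2.1, 4.0, 6.2, 7.9]

ols = fit(X, y, lam=0.0)    # unpenalized: unstable under collinearity
ridge = fit(X, y, lam=1.0)  # penalized: shrunken, more stable
print(ols, ridge)
```

Under collinearity the normal-equations matrix is nearly singular, so the unpenalized solution is unstable; adding `lam` to the diagonal stabilizes it at the cost of some bias, which is the bias-variance trade-off the paper extends to proportional hazards estimation.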
32.
33.
The quasilikelihood estimator is widely used in data analysis when a likelihood is not available. We illustrate that, with a given variance function, it is not only conservative, in the sense of minimizing a maximum risk, but also robust against possible misspecification of either the likelihood or the cumulants of the model. In examples it is compared with estimators based on maximum likelihood and on quadratic estimating functions.
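As a minimal sketch of the idea, assume a mean-only model for counts with variance function V(mu) = mu (the data below are made up). Only the mean-variance relationship is specified, never a full likelihood; the quasilikelihood estimate is the root of the quasi-score equation, found here by bisection:

```python
# Quasi-score for a mean-only model with variance function V(mu) = mu:
# sum_i (y_i - mu) / V(mu) = 0. No full likelihood is ever written down.

def quasi_score(mu, y):
    return sum((yi - mu) / mu for yi in y)

def solve(y, lo=1e-6, hi=1e6, iters=200):
    # quasi_score is decreasing in mu, so bisection applies.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if quasi_score(mid, y) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

y = [3, 0, 2, 5, 1, 4]
mu_hat = solve(y)
print(round(mu_hat, 6))
```

For this particular variance function the root coincides with the sample mean, matching the Poisson maximum likelihood estimate even though no Poisson assumption was made.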
34.
35.
This paper describes an innovative application of statistical process control (SPC) to the online remote control of the UK's gas transportation networks. The gas industry went through a number of changes in ownership, regulation, access to networks, organization and management culture in the 1990s. The application of SPC was motivated by these changes, along with the desire to apply the best industrial statistics theory to practical problems. The work was initiated by a studentship, with the technology gradually being transferred to the industry. The combined efforts of control engineers and statisticians helped develop a novel SPC system. Having set up the control limits, a system was devised to automatically update and publish the control charts on a daily basis. The charts and an associated discussion forum are available to both managers and control engineers throughout the country at their desktop PCs. The paper describes methods of involving people in designing first-class systems to achieve continual process improvement. It describes how the traditional benefits of SPC can be realized in a 'distal team working' and 'soft systems' context across four Area Control Centres, which together control a system delivering two thirds of the UK's energy needs.
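A much-simplified sketch of the kind of chart such a system automates is a Shewhart individuals chart with 3-sigma limits. The readings below are invented, not from the gas networks, and a production system would add further run rules.

```python
# Illustrative sketch only: set control limits from historical data,
# then flag new readings that fall outside them.
import statistics

def control_limits(values, k=3.0):
    """Centre line and +/- k-sigma limits from historical data."""
    centre = statistics.mean(values)
    sigma = statistics.pstdev(values)
    return centre - k * sigma, centre, centre + k * sigma

def out_of_control(values, lcl, ucl):
    """Indices of readings outside the control limits."""
    return [i for i, v in enumerate(values) if v < lcl or v > ucl]

history = [50.2, 49.8, 50.1, 50.0, 49.9, 50.3, 49.7, 50.1]
lcl, centre, ucl = control_limits(history)

new_readings = [50.0, 50.2, 53.5, 49.9]   # 53.5 is a deliberate spike
print(out_of_control(new_readings, lcl, ucl))
```

The daily automated update described in the paper amounts to re-running this check on each day's readings and publishing the resulting chart to the control centres.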
36.
Many companies are trying to get to the bottom of what their main objectives are and what their business should be doing. The new Six Sigma approach concentrates on clarifying business strategy and making sure that everything relates to company objectives. It is vital to clarify each part of the business in such a way that everyone can understand the causes of variation that can lead to improvements in processes and performance. This paper describes a situation where the full implementation of SPC methodology has made possible a visual and widely appreciated summary of the performance of one important aspect of the business. The major part of the work was identifying the core objectives and deciding how to encapsulate each of them in one or more suitable measurements. The next step was to review the practicalities of obtaining the measurements, and their reliability and representativeness. Finally, the measurements were presented in chart form and the more traditional steps of SPC analysis were commenced. Data from fast-changing business environments are prone to many problems, such as the short history of typical data, unusual distributions and other uncertainties. These issues, and the eventual extraction of a meaningful set of information, are discussed in the paper. The measurement framework has proved very useful and, from an initial circulation of a handful of people, it now forms an important part of an information process that provides responsible managers with valuable control information. The measurement framework is kept fresh and vital by constant review and modification. Improved electronic data collection and dissemination of the report have proved very important.
37.
Well-known estimation methods such as conditional least squares, quasilikelihood and maximum likelihood (ML) can be unified via a single framework of martingale estimating functions (MEFs). Asymptotic distributions of estimates for ergodic processes use a constant norm (e.g. the square root of the sample size) for asymptotic normality. For certain non-ergodic-type applications, however, such as explosive autoregression and super-critical branching processes, one needs a random norm in order to obtain normal limit distributions. In this paper, we are concerned with non-ergodic processes and investigate limit distributions for a broad class of MEFs. Asymptotic optimality (within a certain class of non-ergodic MEFs) of the ML estimate is deduced by establishing a convolution theorem using a random norm. Applications to non-ergodic autoregressive processes, generalized autoregressive conditional heteroscedastic-type processes, and super-critical branching processes are discussed. Asymptotic optimality in terms of the maximum random limiting power of large sample tests is briefly discussed.
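For a concrete instance of a martingale estimating function (in the simpler ergodic regime, not the non-ergodic setting the paper studies), the conditional least squares estimator of an AR(1) coefficient solves the MEF sum over t of x_{t-1}(x_t - rho * x_{t-1}) = 0. A simulated sketch, with the true coefficient and noise chosen arbitrarily:

```python
# Conditional least squares for AR(1) as a martingale estimating
# function: the root of sum_t x_{t-1} * (x_t - rho * x_{t-1}) = 0
# has the closed form below. Simulated data, parameters assumed.
import random

def ar1_cls(x):
    """CLS estimate of the AR(1) coefficient."""
    num = sum(x[t - 1] * x[t] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

random.seed(2)
rho_true = 0.6                      # stationary case, |rho| < 1
x = [0.0]
for _ in range(5000):
    x.append(rho_true * x[-1] + random.gauss(0, 1))

rho_hat = ar1_cls(x)
print(round(rho_hat, 2))
```

In the explosive case |rho| > 1 treated in the paper, the same estimating function still applies but the deterministic norm must be replaced by a random one to obtain a normal limit, which is the point of the convolution theorem.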
38.
A model-based classification technique is developed based on mixtures of multivariate t-factor analyzers. Specifically, two related mixture models are developed and their classification efficacy is studied. An AECM algorithm is used for parameter estimation, and its convergence is assessed using Aitken's acceleration. Two techniques are proposed for model selection: the BIC and the ICL. Our classification technique is applied to data on red wine samples from Italy and to fatty acid measurements on Italian olive oils. The results are discussed and compared with more established classification techniques; under this comparison, our mixture models give excellent classification performance.
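A heavily simplified analogue of the model-based approach: a two-component, one-dimensional Gaussian mixture fitted by plain EM on synthetic data. The paper's multivariate t-factor mixtures and AECM algorithm are substantially more involved; this sketch only shows the E-step/M-step mechanics that model-based clustering rests on.

```python
# Toy two-component 1-D Gaussian mixture via EM. Synthetic data only.
import math, random

def em_gmm1d(x, iters=200):
    x = sorted(x)
    n = len(x)
    # Crude initialisation from the two halves of the sorted data.
    mu = [sum(x[:n // 2]) / (n // 2), sum(x[n // 2:]) / (n - n // 2)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        r = []
        for xi in x:
            d = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k]))
                 for k in (0, 1)]
            s = d[0] + d[1]
            r.append((d[0] / s, d[1] / s))
        # M-step: weighted updates of mixing proportions, means, variances.
        for k in (0, 1):
            nk = sum(ri[k] for ri in r)
            pi[k] = nk / n
            mu[k] = sum(ri[k] * xi for ri, xi in zip(r, x)) / nk
            var[k] = sum(ri[k] * (xi - mu[k]) ** 2
                         for ri, xi in zip(r, x)) / nk
    return pi, mu, var

random.seed(1)
data = ([random.gauss(0, 1) for _ in range(300)]
        + [random.gauss(5, 1) for _ in range(300)])
pi, mu, var = em_gmm1d(data)
print(sorted(mu))
```

Classification then assigns each observation to the component with the highest responsibility; the t-factor version adds heavy tails and a factor-analytic covariance structure to handle high-dimensional, outlier-prone data.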
39.
This article aims to estimate the parameters of the Weibull distribution in step-stress partially accelerated life tests under multiply censored data. In a step-stress partially accelerated life test, all test units are first run simultaneously under normal conditions for a pre-specified time, and the surviving units are then run under accelerated conditions until a predetermined censoring time. Maximum likelihood estimation is used to obtain the parameters of the Weibull distribution and the acceleration factor under multiply censored data. Additionally, confidence intervals for the estimators are obtained. Simulation results show that the maximum likelihood estimates perform well in most cases in terms of mean bias, root mean square error and coverage rate. An example is used to illustrate the performance of the proposed approach.
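As a simplified sketch of Weibull maximum likelihood (complete, uncensored sample only; the paper's step-stress, multiply censored setting adds the acceleration factor and censoring contributions to the likelihood), the shape MLE can be found by bisection on the profile score, after which the scale has a closed form:

```python
# Complete-sample Weibull MLE. The shape k solves
#   sum(x^k * ln x) / sum(x^k) - 1/k - mean(ln x) = 0,
# a monotone equation solved here by bisection; the scale follows.
import math, random

def weibull_mle(data, lo=0.05, hi=50.0, iters=80):
    logs = [math.log(x) for x in data]
    mean_log = sum(logs) / len(logs)

    def g(k):
        sk = sum(x ** k for x in data)
        skl = sum((x ** k) * math.log(x) for x in data)
        return skl / sk - 1.0 / k - mean_log

    for _ in range(iters):          # g is increasing in k
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    scale = (sum(x ** k for x in data) / len(data)) ** (1.0 / k)
    return k, scale

random.seed(0)
# Simulated sample: scale 2.0, shape 1.5 (random.weibullvariate
# takes scale first, then shape).
sample = [random.weibullvariate(2.0, 1.5) for _ in range(2000)]
shape, scale = weibull_mle(sample)
print(round(shape, 2), round(scale, 2))
```

With censoring, each censored unit contributes a survival term rather than a density term, and under step-stress the accelerated-phase times are rescaled by the acceleration factor, so the score equations no longer reduce to this single monotone equation.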
40.
The boxplot is an effective data-visualization tool useful across diverse applications and disciplines. Although more sophisticated graphical methods exist, the boxplot remains relevant due to its simplicity, interpretability and usefulness, even in the age of big data. This article highlights the origins and development of the boxplot, now widely viewed as an industry standard, as well as its inherent limitations when dealing with data from skewed distributions, particularly when detecting outliers. The proposed Ratio-Skewed boxplot is shown to be practical and suitable for outlier labeling across several parametric distributions.
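The classic boxplot outlier rule that the article takes as its starting point labels points beyond the Tukey fences Q1 - 1.5*IQR and Q3 + 1.5*IQR. The Ratio-Skewed variant proposed in the paper adjusts these fences for skewness; the sketch below implements only the classic rule, on invented data:

```python
# Classic (symmetric) boxplot fences. On skewed data this rule tends
# to over-flag points in the long tail, which motivates skew-adjusted
# variants such as the one proposed in the article.
import statistics

def tukey_fences(values, k=1.5):
    """Lower and upper fences: Q1 - k*IQR and Q3 + k*IQR."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

data = [12, 14, 14, 15, 16, 17, 18, 19, 20, 45]
lo, hi = tukey_fences(data)
outliers = [v for v in data if v < lo or v > hi]
print(outliers)
```

Because both fences use the same multiple of the IQR, the rule is symmetric about the box; a skew-adjusted boxplot widens the fence on the long-tailed side instead.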
Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)