Search results: 2,486 records.
141.
A procedure is suggested for selecting a subset of predictor variables in regression analysis. The procedure is designed to select a subset of variables having an adequate degree of informativeness at a directly specified confidence coefficient. Examples illustrate the application of the procedure.
142.
ABSTRACT

This paper addresses estimation of the population mean on the current (second) occasion in two-occasion successive sampling. Utilizing readily available information on several auxiliary variables on both occasions, together with information on the study variable from the previous occasion, an estimation procedure for the population mean on the current occasion is proposed. Theoretical properties of the proposed estimator are investigated, and the optimum replacement policy for it is discussed. The proposed estimator is compared empirically with the sample mean estimator, when there is no matching, and with the optimum estimator, a linear combination of the means of the matched and unmatched portions of the sample at the current occasion. Recommendations are made for practical applications.
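The comparison described above involves an estimator built from the matched and unmatched portions of the current-occasion sample. A minimal sketch, with simulated population values, a simple regression adjustment for the matched portion, and an equal combining weight (the paper's optimum weight is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated population: y on two occasions, correlated over time.
N = 20_000
y1 = rng.normal(50, 10, size=N)
y2 = 0.8 * (y1 - 50) + 50 + rng.normal(0, 6, size=N)

# Occasion-1 sample of n units; retain m of them (matched portion),
# replace the rest with a fresh unmatched draw on occasion 2.
n, m = 400, 160
s1 = rng.choice(N, size=n, replace=False)
matched = s1[:m]
unmatched = rng.choice(np.setdiff1d(np.arange(N), s1), size=n - m, replace=False)

# Regression-adjust the matched mean using occasion-1 information.
b = np.cov(y2[matched], y1[matched])[0, 1] / np.var(y1[matched], ddof=1)
ybar_m = y2[matched].mean() + b * (y1[s1].mean() - y1[matched].mean())
ybar_u = y2[unmatched].mean()

# Combine the two portions (phi = 0.5 is illustrative, not optimal).
phi = 0.5
estimate = phi * ybar_u + (1 - phi) * ybar_m
print(round(estimate, 1))
```

The true current-occasion mean is 50 by construction, so the combined estimate should land close to it.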
143.
Consider the linear regression model Y = Xθ + ε, where Y denotes a vector of n observations on the dependent variable, X is a known matrix, θ is a vector of parameters to be estimated, and ε is a random vector of uncorrelated errors. If X'X is nearly singular, that is, if the smallest characteristic root of X'X is small, then a small perturbation in the elements of X, such as that due to measurement errors, induces considerable variation in the least squares estimate of θ. In this paper we examine, for the asymptotic case when n is large, the effect of perturbation on the bias and mean squared error of the estimate.
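The sensitivity described above is easy to demonstrate numerically. A minimal sketch, assuming two nearly collinear predictors (all data simulated for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Two nearly collinear predictors make X'X close to singular.
x1 = rng.normal(size=n)
x2 = x1 + 1e-4 * rng.normal(size=n)   # almost a copy of x1
X = np.column_stack([x1, x2])
theta = np.array([1.0, 1.0])
y = X @ theta + 0.1 * rng.normal(size=n)

def ols(X, y):
    """Least squares estimate via the normal equations."""
    return np.linalg.solve(X.T @ X, X.T @ y)

beta = ols(X, y)

# A tiny measurement-error perturbation of X ...
X_pert = X + 1e-5 * rng.normal(size=X.shape)
beta_pert = ols(X_pert, y)

# ... shifts the estimate by far more than the perturbation itself.
shift = np.linalg.norm(beta_pert - beta)
cond = np.linalg.cond(X.T @ X)
print(f"condition number of X'X: {cond:.2e}")
print(f"estimate shift: {shift:.3f}")
```

The shift in the estimate is orders of magnitude larger than the norm of the perturbation, which is exactly the instability the abstract analyses.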
144.
Human error is one of the significant factors contributing to accidents. Traditional human error probability (HEP) studies based on fuzzy number concepts are one contribution addressing this problem, and they are particularly useful where data are scarce. However, the discriminability of such studies may be questioned when experts have adequate information and specific values can be determined on the abscissa of the membership functions of the linguistic terms, that is, when the fuzzy data of the scenarios considered are close to each other. In this article, a novel HEP assessment aimed at solving this difficulty is proposed. Under the framework, the fuzzy data are equipped with linguistic terms and membership values. By establishing a rule base for data combination, followed by defuzzification and HEP transformation, the HEP results are acquired. The methodology is first examined on a test case of three scenarios whose fuzzy data are close to each other, and the results are compared with those produced by traditional fuzzy HEP studies on the same test case. It is concluded that the proposed methodology has a higher degree of discriminability and provides more reasonable results. Furthermore, where data are scarce, the approach can also provide a range of HEP results based on different, arbitrarily established risk viewpoints, as illustrated with a real-world example.
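The defuzzification and HEP-transformation steps mentioned above can be sketched as follows. The centroid rule and the log-scale transformation constants are illustrative assumptions, not the paper's actual rule base:

```python
def centroid_defuzzify(xs, mus):
    """Centroid (centre-of-gravity) defuzzification of a discrete
    fuzzy set, given abscissa values xs and membership values mus."""
    num = sum(x * m for x, m in zip(xs, mus))
    den = sum(mus)
    return num / den

# A fuzzy expert judgement over an error-likelihood scale (illustrative).
xs  = [0.2, 0.3, 0.4, 0.5]
mus = [0.2, 1.0, 0.6, 0.1]
z = centroid_defuzzify(xs, mus)

# A common style of HEP transformation maps the defuzzified score z
# in (0, 1) to a probability on a log scale; the constants here are
# assumptions for illustration only.
hep = 10 ** (-3 * (1 - z))
print(round(z, 3), hep)
```

The point of the centroid step is that nearby fuzzy inputs produce nearby crisp scores, which is precisely where discriminability becomes an issue.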
145.
A well-known process capability index is slightly modified in this article to provide a new measure of process capability that accounts for both process location and variability, and for which a point estimator and confidence intervals exist that are insensitive to departures from the assumption of normal variability. Two examples of applications based on real data are presented.
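A standard index of the kind the abstract alludes to is C_pk, which already combines location and variability. A minimal sketch of its sample estimate (the paper's modification is not reproduced here, and the data are simulated):

```python
import numpy as np

def cpk(x, lsl, usl):
    """Sample C_pk: distance from the process mean to the nearer
    specification limit, in units of three standard deviations."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)
    return min(usl - mu, mu - lsl) / (3.0 * sigma)

# A centred, tightly spread process: capability comfortably above 1.
rng = np.random.default_rng(1)
sample = rng.normal(loc=10.0, scale=0.2, size=500)
print(round(cpk(sample, lsl=8.0, usl=12.0), 2))
```

Because C_pk takes the minimum of the two one-sided distances, an off-centre mean lowers the index even when the spread is unchanged.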
146.
Using Monte Carlo methods, the properties of systemwise generalisations of the Breusch-Godfrey test for autocorrelated errors are studied when the error terms follow either normal or non-normal distributions, and when these errors follow either AR(1) or MA(1) processes. Edgerton and Shukur (1999) studied the properties of the test with normally distributed error terms following an AR(1) process. When the errors follow a non-normal distribution, the performance of the tests deteriorates, especially when the tails are very heavy; performance improves, approaching the normal-error case, as the error distribution becomes less heavy-tailed.
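The single-equation Breusch-Godfrey test underlying these systemwise generalisations can be sketched directly: regress the OLS residuals on the regressors and their own lags, and use n·R² as an asymptotically chi-squared LM statistic. All data below are simulated, with AR(1) errors as in the simulation design described above:

```python
import numpy as np

def breusch_godfrey_lm(X, resid, p=1):
    """LM statistic for the Breusch-Godfrey test with p lags:
    regress the residuals on X and their first p lags; the
    statistic n * R^2 is asymptotically chi-squared(p)."""
    n = len(resid)
    lags = np.column_stack(
        [np.r_[np.zeros(k), resid[:-k]] for k in range(1, p + 1)]
    )
    Z = np.column_stack([X, lags])
    coef, *_ = np.linalg.lstsq(Z, resid, rcond=None)
    fitted = Z @ coef
    r2 = 1.0 - ((resid - fitted) ** 2).sum() / ((resid - resid.mean()) ** 2).sum()
    return n * r2

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
e = np.zeros(n)
for t in range(1, n):          # AR(1) errors with coefficient 0.7
    e[t] = 0.7 * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
lm = breusch_godfrey_lm(X, resid, p=1)
# Compare with the chi-squared(1) 5% critical value, 3.84.
print(lm > 3.84)
```

With strong AR(1) dependence the statistic lies far above the critical value, so the test rejects the no-autocorrelation null.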
147.
Following the extension from linear mixed models to additive mixed models, the extension from generalized linear mixed models to generalized additive mixed models is made. Algorithms are developed to compute the MLEs of the nonlinear effects and the covariance structures based on the penalized marginal likelihood. Convergence of the algorithms and selection of the smoothing parameters are discussed.
148.
Factor Analysis of Interval-Valued Symbolic Data and Its Application (total citations: 1; self-citations: 0; citations by others: 1)
Symbolic data analysis is an emerging data mining technique, and interval numbers are the most commonly used kind of symbolic data. Based on the theory of error analysis, a factor analysis method for interval data is developed. An interval number is regarded as an ordered pair consisting of a midpoint and a radius, with the radius treated as the limiting error of the interval number. Factor analysis is performed on the midpoint sample matrix to obtain the midpoints of the factor scores; the radius sample matrix is then processed according to the error-propagation formula to obtain the limiting errors of the factor scores. The interval values of the factor scores are finally obtained from their midpoints and limiting errors. The method is applied in a case study evaluating the overall market performance of stocks.
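The midpoint-plus-radius construction and the error-propagation step can be sketched as follows. Principal components stand in for the factor extraction, one component is retained, and all data are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 50, 4

# Interval data as ordered pairs (midpoint, radius), the radius
# treated as a limiting error on the midpoint.
mid = rng.normal(size=(n, p))
rad = 0.1 * np.abs(rng.normal(size=(n, p)))

# Extraction on the midpoint matrix via PCA (a stand-in for the
# full factor analysis described in the abstract).
mid_c = mid - mid.mean(axis=0)
_, _, Vt = np.linalg.svd(mid_c, full_matrices=False)
w = Vt[0]                     # loading vector of the first component
score_mid = mid_c @ w         # midpoints of the factor scores

# Error propagation: for a linear combination, the limiting error is
# the absolute-weighted sum of the individual limiting errors.
score_rad = rad @ np.abs(w)

# Interval-valued factor scores.
lower, upper = score_mid - score_rad, score_mid + score_rad
print(lower.shape, upper.shape)
```

Taking absolute values of the weights ensures the propagated radius is a worst-case bound, so every score interval contains its midpoint.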
149.
A substantial degree of uncertainty surrounds the reconstruction of events based on memory recall. This form of measurement error affects the performance of structured interviews such as the Composite International Diagnostic Interview (CIDI), an important tool for assessing mental health in the community. Measurement error probably explains the discrepancy in estimates between longitudinal studies with repeated assessments (the gold standard), which yield approximately constant rates of depression, and cross-sectional studies, which often find rates increasing closer in time to the interview. Repeated assessments of current status (or recent history) are more reliable than reconstruction of a person's psychiatric history from a single interview. In this paper, we demonstrate a method of estimating a time-varying measurement error distribution in the age of onset of an initial depressive episode, as diagnosed by the CIDI, based on an assumption regarding age-specific incidence rates. High-dimensional non-parametric estimation is achieved by the EM algorithm with smoothing. The method is applied to data from a Norwegian mental health survey in 2000. The measurement error distribution changes dramatically from 1980 to 2000, with increasing variance and greater bias further away in time from the interview. Some influence of the measurement error on already published results is found.
150.
Uncertainty and sensitivity analysis is an essential ingredient of model development and application. For many uncertainty and sensitivity analysis techniques, sensitivity indices are calculated from a relatively large sample to measure the importance of parameters in their contributions to uncertainty in model outputs. To compare their importance statistically, such techniques must provide standard errors of the estimated sensitivity indices. In this paper, a delta method is used to analytically approximate the standard errors of estimated sensitivity indices for a popular sensitivity analysis method, the Fourier amplitude sensitivity test (FAST). Standard errors estimated by the delta method were compared with those estimated from 20 sample replicates; the delta method provides a good approximation for the standard errors of both first-order and higher-order sensitivity indices. Finally, based on this standard error approximation, we propose a method to determine the minimum sample size needed to achieve a desired estimation precision for a specified sensitivity index. The standard error estimation method presented here can make FAST analysis computationally much more efficient for complex models.
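The minimum-sample-size idea, under the usual assumption that the standard error of a Monte Carlo estimate shrinks like 1/√n, can be sketched as follows (the pilot numbers are illustrative, not from the paper):

```python
import math

def min_sample_size(se_hat, n_pilot, target_se):
    """Assuming the standard error of a sensitivity index scales as
    c / sqrt(n), use a pilot estimate (se_hat at sample size n_pilot)
    to find the smallest n achieving a target precision."""
    c = se_hat * math.sqrt(n_pilot)
    return math.ceil((c / target_se) ** 2)

# Pilot FAST run gives se = 0.04 at n = 1000; we want se <= 0.01,
# i.e. a 4-fold precision gain, hence roughly a 16-fold larger sample.
print(min_sample_size(0.04, 1000, 0.01))
```

Halving the target standard error quadruples the required sample size, which is why an analytic standard error (rather than replicate-based estimation) makes the planning step cheap.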
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号