Search results: 10,000 matches (query time 0 ms); entries 941–950 are shown below.
941.
In most of the existing specialized literature, the monitoring of regression models is treated as a special case of profile monitoring. However, not every regression model represents a profile data structure appropriately; this is clearly the case for the Weibull regression model (WRM) with common shape parameter γ. Although it might be thought that existing methodologies for monitoring generalized linear profiles, in particular those based on the likelihood-ratio test (LRT), can also be applied successfully to monitoring regression models with a time-to-event response, this paper shows that those methodologies perform acceptably only for data structures with approximately 1000 observations or more. Corrections, often referred to as Bartlett adjustments, need to be implemented in order to improve the accuracy of the asymptotic distributional properties of the LRT statistic when monitoring the WRM with relatively small or moderate datasets. Simulation studies suggest that, with these corrections, the resulting charts perform quite acceptably once the available data structures contain at least 30 observations, and the detection ability of the proposed schemes improves as the dataset dimension increases.
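For context, the generic form of a Bartlett adjustment is recalled below; this is a standard textbook expression, not the specific corrections derived in the paper, and the adjustment factor b would have to be obtained for the WRM setting.

```latex
% Generic Bartlett adjustment of an LRT statistic W with q degrees of
% freedom; the factor b must be estimated or derived for the model at hand.
\[
  W^{*} \;=\; \frac{q\,W}{\operatorname{E}[W]} \;=\; \frac{W}{1 + b/n},
  \qquad \operatorname{E}[W] \approx q\left(1 + \frac{b}{n}\right),
\]
so that \(W^{*}\) follows its limiting \(\chi^{2}_{q}\) distribution to a
higher order of accuracy than the unadjusted \(W\).
```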
942.
We consider a semi-parametric approach to the joint segmentation of multiple series sharing a common functional part. We propose an iterative procedure based on dynamic programming for the segmentation part and Lasso estimators for the functional part. Our Lasso procedure, based on a dictionary approach, allows us to estimate both smooth functions and functions with local irregularities, which permits more flexibility than previously proposed methods. This yields a better estimate of the functional part and improves the segmentation. The performance of our method is assessed using simulated data and real data from agricultural and geodetic studies. Our estimation procedure proves to be a reliable tool for detecting changes and for obtaining an interpretable estimate of the functional part of the model in terms of known functions.
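For readers unfamiliar with the dictionary approach, the sketch below (not the authors' code; the Fourier-plus-step dictionary and the penalty level are illustrative assumptions) fits a Lasso to a design matrix whose columns are dictionary functions evaluated on a common grid, recovering both a smooth component and a jump:

```python
# Minimal sketch: Lasso regression of a signal on a hand-built dictionary
# (Fourier atoms plus step functions), assuming scikit-learn is available.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)

# Dictionary: sines/cosines (smooth part) and step functions (local irregularity).
atoms = [np.sin(2 * np.pi * k * t) for k in range(1, 6)]
atoms += [np.cos(2 * np.pi * k * t) for k in range(1, 6)]
atoms += [(t >= b).astype(float) for b in np.arange(0.1, 1.0, 0.1)]
D = np.column_stack(atoms)

# Simulated series: smooth trend + one jump + noise.
y = np.sin(2 * np.pi * t) + 0.8 * (t >= 0.6) + 0.1 * rng.standard_normal(t.size)

# Sparse expansion of the functional part on the dictionary.
lasso = Lasso(alpha=0.01, fit_intercept=True, max_iter=10_000)
lasso.fit(D, y)
f_hat = lasso.predict(D)              # estimated functional part
active = np.flatnonzero(lasso.coef_)  # selected dictionary atoms
print(f"{active.size} atoms selected out of {D.shape[1]}")
```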
943.
The case–control design for assessing the accuracy of a binary diagnostic test (BDT) is very common in clinical practice. This design consists of applying the diagnostic test to all individuals in a sample of those who have the disease and in another sample of those who do not. The sensitivity of the diagnostic test is estimated from the case sample and the specificity from the control sample. Another parameter used to assess the performance of a BDT is the weighted kappa coefficient, which depends on the sensitivity and specificity of the test, on the disease prevalence, and on the weighting index. In this article, confidence intervals for the weighted kappa coefficient under a case–control design are studied, and a method is proposed to calculate the sample sizes needed to estimate this parameter. The results are applied to a real example.
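As background on the case–control design itself, the sketch below uses hypothetical counts to show how sensitivity and specificity are estimated from the two samples with simple Wald intervals; it does not reproduce the weighted-kappa interval or the sample-size method proposed in the article:

```python
# Generic sketch: point estimates and Wald confidence intervals for
# sensitivity and specificity from a case-control study of a binary test.
from math import sqrt
from scipy.stats import norm

def wald_ci(successes: int, n: int, level: float = 0.95):
    """Wald interval for a binomial proportion (illustrative only)."""
    p = successes / n
    z = norm.ppf(0.5 + level / 2)
    half = z * sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical counts: 90/100 diseased test positive, 160/180 non-diseased test negative.
se, se_lo, se_hi = wald_ci(90, 100)    # sensitivity from the case sample
sp, sp_lo, sp_hi = wald_ci(160, 180)   # specificity from the control sample
print(f"Se = {se:.3f} ({se_lo:.3f}, {se_hi:.3f});  Sp = {sp:.3f} ({sp_lo:.3f}, {sp_hi:.3f})")
```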
944.
A significant challenge in fitting metamodels of large-scale simulations with sufficient accuracy is the computational time required for rigorous statistical validation. This paper addresses the statistical computation issues associated with the bootstrap and the modified PRESS statistic, which yield key error metrics in metamodel validation. Experiments are performed in different programming languages, namely MATLAB, R, and Python, and implemented on different computing architectures, including traditional multicore personal computers and high-power clusters with parallel computing capabilities. The study yields insight into the effect that programming language and computing architecture have on the computational time for simulation metamodel validation. The experiments cover two scenarios of varying complexity.
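As a concrete reminder of the two metrics being benchmarked, the sketch below (Python with scikit-learn; the simulated linear metamodel is purely illustrative, not one of the paper's test cases) computes a leave-one-out PRESS statistic and a simple bootstrap distribution of the RMSE:

```python
# Sketch of the two validation computations discussed: the PRESS statistic
# (leave-one-out squared prediction error) and a simple bootstrap of RMSE.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(100, 3))
y = 1.0 + X @ np.array([2.0, -1.0, 0.5]) + 0.2 * rng.standard_normal(100)

def press(X, y):
    """Leave-one-out PRESS = sum of squared deleted residuals."""
    total = 0.0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        model = LinearRegression().fit(X[mask], y[mask])
        total += (y[i] - model.predict(X[i:i + 1])[0]) ** 2
    return total

def bootstrap_rmse(X, y, B=200):
    """Bootstrap distribution of in-sample RMSE of the fitted metamodel."""
    rmses = []
    for _ in range(B):
        idx = rng.integers(0, len(y), len(y))
        model = LinearRegression().fit(X[idx], y[idx])
        resid = y[idx] - model.predict(X[idx])
        rmses.append(np.sqrt(np.mean(resid ** 2)))
    return np.array(rmses)

print("PRESS:", press(X, y))
print("bootstrap RMSE (mean):", bootstrap_rmse(X, y).mean())
```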
945.
We study objective Bayesian inference for linear regression models with residual errors distributed according to the class of two-piece scale mixtures of normal distributions. These models capture departures from the usual normality assumption on the errors in terms of heavy tails, asymmetry, and certain types of heteroscedasticity. We propose a general non-informative, scale-invariant prior structure and provide sufficient conditions for the propriety of the posterior distribution of the model parameters, covering cases in which the response variables are censored. These results allow us to apply the proposed models in the context of survival analysis. This paper extends the models proposed in [16] to the Bayesian framework. We present a simulation study that shows good frequentist properties of the posterior credible intervals as well as of the point estimators associated with the proposed priors, and we illustrate the performance of these models with real data on the survival of cancer patients.
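For readers unfamiliar with the error class, the simplest member, the two-piece (split) normal with no scale mixing, has the density sketched below; the parametrization is one common choice and is not taken from the paper:

```python
# Two-piece (split) normal density: different scales on each side of the mode.
# This is the simplest member of the two-piece scale-mixture-of-normals class.
import numpy as np
from scipy.stats import norm

def two_piece_normal_pdf(x, mu, sigma1, sigma2):
    """2/(sigma1+sigma2) * phi((x-mu)/sigma1) for x < mu, else with sigma2."""
    x = np.asarray(x, dtype=float)
    scale = np.where(x < mu, sigma1, sigma2)
    return 2.0 / (sigma1 + sigma2) * norm.pdf((x - mu) / scale)

grid = np.linspace(-5, 5, 7)
print(two_piece_normal_pdf(grid, mu=0.0, sigma1=1.0, sigma2=2.0))
```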
946.
The varying coefficient (VC) model introduced by Hastie and Tibshirani [Varying-coefficient models, J. R. Statist. Soc. Ser. B 55 (1993), pp. 757–796] is arguably one of the most remarkable recent developments in nonparametric regression theory. The VC model is an extension of the ordinary regression model in which the coefficients are allowed to vary as smooth functions of an effect modifier, possibly different from the regressors. The VC model reduces modelling bias through its unique structure while also avoiding the 'curse of dimensionality'. While the VC model has been applied widely in a variety of disciplines, its application in economics has been minimal. The central goal of this paper is to apply VC modelling to the estimation of a hedonic house price function using data from Hong Kong, one of the world's most buoyant real estate markets. We demonstrate the advantages of the VC approach over traditional parametric and semi-parametric regressions in the face of a large number of regressors. We further combine VC modelling with quantile regression to examine the heterogeneity of the marginal effects of attributes across the distribution of housing prices.
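To make the VC structure concrete, a minimal local-constant (kernel-weighted least squares) sketch is given below; it is a generic illustration with simulated data, not the estimator or the Hong Kong dataset used in the paper:

```python
# Varying-coefficient fit by local (kernel-weighted) least squares:
# y_i = x_i' beta(u_i) + e_i, with beta(.) estimated pointwise over u.
import numpy as np

rng = np.random.default_rng(2)
n = 500
u = rng.uniform(0, 1, n)                                 # effect modifier
X = np.column_stack([np.ones(n), rng.normal(size=n)])    # intercept + one regressor
beta_true = np.column_stack([np.sin(2 * np.pi * u), 1 + u])  # smooth coefficient curves
y = np.sum(X * beta_true, axis=1) + 0.1 * rng.standard_normal(n)

def vc_fit(u0, u, X, y, h=0.1):
    """Local-constant estimate of beta(u0) with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((u - u0) / h) ** 2)
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

grid = np.linspace(0.05, 0.95, 10)
beta_hat = np.array([vc_fit(u0, u, X, y) for u0 in grid])
print(np.round(beta_hat, 2))   # columns: estimated intercept(u) and slope(u)
```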
947.
948.
In this study, some methods suggested for binary repeated measures, namely Weighted Least Squares (WLS), Generalized Estimating Equations (GEE), and Generalized Linear Mixed Models (GLMM), are compared with respect to power, type I error, and the properties of their estimates. The results indicate that with an adequate sample size, no missing data, the time effect as the only covariate, and a relatively limited number of time points, the WLS method performs well. The GEE approach performs well only for large sample sizes. The GLMM method is satisfactory with respect to type I error, but its estimates have poorer properties than those of the other methods.
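For reference, the GEE approach for binary repeated measures can be set up as in the sketch below (statsmodels in Python, with simulated data and an exchangeable working correlation; the study's own software and model specification may differ):

```python
# Sketch: GEE for binary repeated measures with an exchangeable working
# correlation, using statsmodels (illustrative data and formula).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.genmod.cov_struct import Exchangeable

rng = np.random.default_rng(3)
n_subj, n_time = 50, 4
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_subj), n_time),
    "time": np.tile(np.arange(n_time), n_subj),
})
# Simulated binary response whose success probability increases with time.
df["y"] = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.4 * df["time"]))))

model = smf.gee("y ~ time", groups="id", data=df,
                family=sm.families.Binomial(), cov_struct=Exchangeable())
result = model.fit()
print(result.summary())
```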
949.
In semidefinite programming (SDP), a linear objective function is minimized subject to the constraint that a matrix depending linearly on the decision variables is positive semidefinite. A powerful program, SeDuMi, has been developed in MATLAB to solve SDP problems. In this article, we show in detail how to formulate A-optimal and E-optimal design problems as SDP problems and solve them with SeDuMi. This technique can be used to construct approximate A-optimal and E-optimal designs for all linear and nonlinear regression models with discrete design spaces. In addition, the results on discrete design spaces provide useful guidance for finding optimal designs on any continuous design space, and a convergence result is derived. Moreover, restrictions on the designs can easily be incorporated into the SDP problems and solved by SeDuMi. Several representative examples and one MATLAB program are given.
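The article formulates and solves its SDPs with SeDuMi in MATLAB; the sketch below shows the same E-optimality idea in Python with CVXPY as a stand-in (the simple-linear-regression design space and the solver choice are assumptions, not the article's program). E-optimality maximizes the smallest eigenvalue of the information matrix, which is naturally an SDP:

```python
# Sketch: approximate E-optimal design on a discrete design space via SDP.
# maximize t  s.t.  sum_i w_i f(x_i) f(x_i)' >= t * I,  w >= 0,  sum w = 1.
import numpy as np
import cvxpy as cp

xs = np.linspace(-1, 1, 11)                  # candidate design points
F = np.column_stack([np.ones_like(xs), xs])  # regression functions f(x) = (1, x)
n, p = F.shape

w = cp.Variable(n, nonneg=True)              # design weights
t = cp.Variable()
M = sum(w[i] * np.outer(F[i], F[i]) for i in range(n))   # information matrix

prob = cp.Problem(cp.Maximize(t),
                  [M >> t * np.eye(p), cp.sum(w) == 1])
prob.solve()

print("smallest eigenvalue:", t.value)
print("weights:", np.round(w.value, 3))      # mass concentrates on the support points
```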
950.
This paper compares the performance of regression analysis and a clustering-based neural network approach when the data deviate from the homoscedasticity assumption of regression. Heteroscedasticity is a problem that arises in linear regression due to unequal error variances. One method for dealing with heteroscedasticity in classical regression theory is weighted least-squares (WLS) regression. To address the problem of heteroscedasticity, a backpropagation neural network is applied. In this context, an algorithm based on robust estimates of location and the dispersion matrix is proposed, which helps preserve the error assumptions of linear regression. The analysis is carried out with appropriate designs using simulated data, and the results are presented.
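As a baseline for the classical remedy mentioned, a minimal WLS sketch follows (statsmodels in Python; the variance model proportional to x is an assumption, and this is not the clustering-based neural-network algorithm proposed in the paper):

```python
# Sketch: weighted least squares when the error variance grows with a regressor.
# Weights are the reciprocals of the (assumed known) error variances.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
x = rng.uniform(1, 10, 200)
sigma2 = 0.5 * x                       # heteroscedastic: variance proportional to x
y = 2.0 + 3.0 * x + rng.normal(scale=np.sqrt(sigma2))

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                          # ignores the unequal variances
wls = sm.WLS(y, X, weights=1.0 / sigma2).fit()    # downweights high-variance points
print(ols.params, wls.params)
print(ols.bse, wls.bse)                # WLS standard errors reflect the weighting
```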