991.
In this paper we explore statistical properties of some difference-based approaches to estimating the error variance in small samples under nonparametric regression with a mean function satisfying a Lipschitz condition. Our study is motivated by Tong and Wang (2005), who estimated the error variance using a least squares approach. They treated the error variance as the intercept of a simple linear regression obtained from the expectation of their lag-k Rice estimator. Their variance estimators depend heavily on the choice of regressor and weight in this simple linear regression. Although the regressor and weight could be adapted to the characteristics of the unknown nonparametric mean function, Tong and Wang (2005) used a fixed regressor and weight for large samples and gave no indication of how to determine them. In this paper, we propose a new approach, via local quadratic approximation, to determine this regressor and weight. Using the proposed regressor and weight, we estimate the error variance as the intercept of a simple linear regression fitted by both ordinary least squares and weighted least squares. Our approach applies to both small and large samples, whereas most existing difference-based methods are appropriate only for large samples. We compare the performance of our approach with that of existing approaches in an extensive simulation study, and demonstrate its advantage on a real data set.
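For intuition, here is a minimal numerical sketch of the least-squares idea above: compute lag-k Rice estimators s_k, regress them on a lag-dependent regressor, and read off the error variance as the intercept. The regressor d_k = (k/n)^2 used below comes from a first-order Taylor expansion on an equispaced design and is only an illustrative choice, not necessarily the regressor or weighting proposed in the paper; the simulated data and function names are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated nonparametric regression on an equispaced design
n = 100
x = np.arange(1, n + 1) / n
sigma2_true = 0.25
y = np.sin(2 * np.pi * x) + rng.normal(scale=np.sqrt(sigma2_true), size=n)

def rice_lag_k(y, k):
    """Lag-k Rice estimator: s_k = sum_i (y_{i+k} - y_i)^2 / (2 (n - k))."""
    d = y[k:] - y[:-k]
    return np.sum(d ** 2) / (2.0 * (len(y) - k))

# Fit s_k = alpha + beta * d_k by ordinary least squares; alpha estimates sigma^2.
m = 10                                   # number of lags used
ks = np.arange(1, m + 1)
s = np.array([rice_lag_k(y, k) for k in ks])
d = (ks / n) ** 2                        # illustrative regressor from a Taylor expansion
A = np.column_stack([np.ones(m), d])
alpha_hat, beta_hat = np.linalg.lstsq(A, s, rcond=None)[0]
print("true sigma^2:", sigma2_true, " estimated intercept:", round(float(alpha_hat), 4))
```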
992.
We propose a new method to test for ordering between two high-dimensional mean curves. The new statistic extends the approach of Follmann (1996) to high-dimensional data by adapting the strategy of Bai and Saranadasa (1996). The proposed procedure is an alternative to the non-negative basis matrix factorization (NBMF) based test of Lee et al. (2008) for the same hypothesis, but it is much easier to implement. We derive the asymptotic mean and variance of the proposed test statistic under the null hypothesis of equal mean curves. Based on these theoretical results, we put forward a permutation procedure to approximate the null distribution of the new test statistic. We compare the power of the proposed test with that of the NBMF-based test via simulations, and we illustrate the approach with an application to tidal volume traces.
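As a rough illustration of the permutation calibration described above, the sketch below permutes group labels and recomputes a simple statistic in the Bai and Saranadasa (1996) spirit, the squared distance between the two mean curves; it is not the paper's exact statistic (which also addresses the ordering of the curves), and the simulated data are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two groups of high-dimensional "curves": n1, n2 observations of dimension p >> n
n1, n2, p = 15, 15, 200
X = rng.normal(size=(n1, p))
Y = rng.normal(size=(n2, p)) + 0.15     # small upward shift in the second group

def stat(X, Y):
    """Squared Euclidean distance between the two group mean curves."""
    return np.sum((X.mean(axis=0) - Y.mean(axis=0)) ** 2)

obs = stat(X, Y)
Z = np.vstack([X, Y])
B = 2000
perm = np.empty(B)
for b in range(B):
    idx = rng.permutation(n1 + n2)       # relabel the pooled sample
    perm[b] = stat(Z[idx[:n1]], Z[idx[n1:]])
p_value = (1 + np.sum(perm >= obs)) / (B + 1)
print("permutation p-value:", round(p_value, 4))
```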
993.
We propose a new type of stochastic ordering that imposes a monotone tendency on the differences between one multinomial probability vector and a known standard one. An estimation procedure is proposed for the constrained maximum likelihood estimate, and the asymptotic null distribution is then derived for the likelihood ratio statistic used to test equality of two multinomial distributions against the new stochastic ordering. An alternative test based on the Neyman modified minimum chi-square estimator is also discussed. These tests are illustrated with a set of heart disease data.
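To fix ideas, the sketch below computes the ordinary unrestricted likelihood ratio statistic G^2 for testing a multinomial sample against a known standard distribution; the paper's test replaces the unrestricted alternative with the constrained maximum likelihood estimate under the new stochastic ordering, which is not implemented here, and the counts are invented.

```python
import numpy as np
from scipy.stats import chi2

# Observed multinomial counts and a known standard probability vector (both invented)
counts = np.array([30, 25, 20, 25])
p0 = np.array([0.25, 0.25, 0.25, 0.25])
n = counts.sum()

# Unrestricted MLE of the cell probabilities and the likelihood ratio statistic
p_hat = counts / n
G2 = 2.0 * np.sum(counts * np.log(p_hat / p0))
df = len(counts) - 1                 # chi-square df for the unrestricted alternative
print("G^2 =", round(float(G2), 3), " p-value =", round(float(chi2.sf(G2, df)), 4))
```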
994.
Hierarchical generalized linear models (HGLMs) have become popular in data analysis. However, their maximum likelihood (ML) and restricted maximum likelihood (REML) estimators are often difficult to compute, especially when the random effects are correlated, because obtaining the likelihood function involves high-dimensional integration. Recently, an h-likelihood method that does not involve numerical integration has been proposed. In this study, we show how an h-likelihood method can be implemented by modifying existing ML and REML procedures. A small simulation study is carried out to investigate the performance of the proposed methods for HGLMs with correlated random effects.
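The toy sketch below illustrates the h-likelihood idea for a Poisson HGLM with independent normal random intercepts: the fixed and random effects are obtained by jointly maximizing the h-likelihood, and the variance component by maximizing a Laplace-type adjusted profile of it. This is only a simplified sketch under an invented simulated setup; the correlated random-effects case studied in the paper is not handled here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate a Poisson HGLM: y_ij ~ Poisson(exp(b0 + b1*x_ij + v_j)), v_j ~ N(0, lam)
J, n_per = 20, 10
true_beta, true_lam = np.array([0.5, 1.0]), 0.3
cluster = np.repeat(np.arange(J), n_per)
x = rng.normal(size=J * n_per)
v_true = rng.normal(scale=np.sqrt(true_lam), size=J)
y = rng.poisson(np.exp(true_beta[0] + true_beta[1] * x + v_true[cluster]))
X = np.column_stack([np.ones_like(x), x])

def neg_h(params, lam):
    """Negative h-likelihood: Poisson log-lik of y given v plus N(0, lam) log-density of v."""
    beta, v = params[:2], params[2:]
    eta = X @ beta + v[cluster]
    loglik_y = np.sum(y * eta - np.exp(eta))                     # Poisson kernel, constants dropped
    loglik_v = np.sum(-0.5 * v ** 2 / lam - 0.5 * np.log(2 * np.pi * lam))
    return -(loglik_y + loglik_v)

def adjusted_profile(lam):
    """Laplace-type adjusted profile of the h-likelihood in lam (value to be minimised)."""
    res = minimize(neg_h, np.zeros(2 + J), args=(lam,), method="BFGS")
    beta, v = res.x[:2], res.x[2:]
    eta = X @ beta + v[cluster]
    # curvature of -h with respect to each v_j: sum_i exp(eta_ij) + 1/lam
    d2 = np.array([np.exp(eta[cluster == j]).sum() for j in range(J)]) + 1.0 / lam
    return res.fun + 0.5 * np.sum(np.log(d2 / (2 * np.pi))), res.x

lams = np.linspace(0.05, 1.0, 20)                 # coarse grid for the variance component
profiles = [adjusted_profile(l) for l in lams]
best = int(np.argmin([p[0] for p in profiles]))
print("estimated variance component:", round(float(lams[best]), 3))
print("estimated fixed effects:", np.round(profiles[best][1][:2], 3))
```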
995.
This article investigates statistical inference about differences of covariance matrices when the response takes more than two values. The subspace constructed from differences of covariance matrices is related to the sufficient dimension reduction subspace and the central subspace. The asymptotic distribution of the test statistic for the structural dimension is outlined.
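As a concrete illustration of how differences of covariance matrices can identify a dimension-reduction subspace, the sketch below applies a SAVE-type construction on standardized predictors for a three-level response; this is a standard construction in the same spirit, not necessarily the article's test statistic, and the simulated model is invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated data: p predictors, 3-level response whose conditional covariance
# changes only along the first coordinate direction.
n, p = 600, 5
y = rng.integers(0, 3, size=n)
X = rng.normal(size=(n, p))
X[:, 0] *= (1.0 + y)                     # level-dependent spread along the first axis

# Standardize the predictors
Xc = X - X.mean(axis=0)
Sigma = np.cov(Xc, rowvar=False)
evals, evecs = np.linalg.eigh(Sigma)
Sigma_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
Z = Xc @ Sigma_inv_sqrt

# SAVE-type kernel: weighted average of (I - Cov(Z | Y = level))^2 over response levels
M = np.zeros((p, p))
for level in np.unique(y):
    Zy = Z[y == level]
    D = np.eye(p) - np.cov(Zy, rowvar=False)
    M += (len(Zy) / n) * (D @ D)

# Leading eigenvectors of M (mapped back to the X scale) estimate the directions
vals, vecs = np.linalg.eigh(M)
direction = Sigma_inv_sqrt @ vecs[:, -1]
print("leading estimated direction (should load mainly on the first coordinate):")
print(np.round(direction / np.linalg.norm(direction), 3))
```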
996.
We consider statistical procedures for feature selection defined by a family of regularization problems with convex piecewise linear loss functions and l1-type penalties. Many known statistical procedures (e.g. quantile regression and the support vector machine with an l1-norm penalty) fall under this category. Computationally, the regularization problems are linear programming (LP) problems indexed by a single parameter, known as 'parametric cost LP' or 'parametric right-hand-side LP' in optimization theory. Exploiting the connection with LP theory, we lay out general algorithms, namely the simplex algorithm and its variant, for generating regularized solution paths for these feature selection problems. The significance of such algorithms is that they allow a complete exploration of the model space along the paths and provide a broad view of persistent features in the data. The implications of the general path-finding algorithms are outlined for several statistical procedures and illustrated with numerical examples.
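To make the LP connection concrete, the sketch below writes l1-penalized quantile regression as a linear program and solves it with scipy.optimize.linprog over a grid of penalty values; this grid approximation stands in for the parametric-simplex path-following described above, and the simulated data and variable names are illustrative only.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)

# Simulated sparse regression data with heavy-tailed noise
n, d, tau = 80, 6, 0.5
X = np.column_stack([np.ones(n), rng.normal(size=(n, d))])   # first column: intercept
beta_true = np.array([1.0, 2.0, 0.0, 0.0, -1.5, 0.0, 0.0])
y = X @ beta_true + rng.standard_t(df=3, size=n)

def l1_quantile_lp(X, y, tau, lam):
    """Solve min sum_i rho_tau(y_i - x_i'b) + lam * ||b[1:]||_1 as an LP.

    Variables: b = p - q (positive/negative parts), residual r = u - v.
    Objective: tau * 1'u + (1 - tau) * 1'v + lam * 1'(p + q), intercept unpenalized.
    Constraint: X (p - q) + u - v = y, all variables >= 0.
    """
    n, k = X.shape
    w = np.r_[0.0, np.full(k - 1, lam)]          # no penalty on the intercept
    c = np.r_[w, w, np.full(n, tau), np.full(n, 1 - tau)]
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    z = res.x
    return z[:k] - z[k:2 * k]

# Trace a (discretized) regularization path over a grid of penalty values
for lam in [0.01, 1.0, 5.0, 20.0, 80.0]:
    b = l1_quantile_lp(X, y, tau, lam)
    print(f"lambda = {lam:6.2f}  beta_hat = {np.round(b, 2)}")
```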
997.
Data depth provides a natural means of ranking multivariate vectors with respect to an underlying multivariate distribution. Most existing depth functions emphasize a centre-outward ordering of data points, which may not provide a useful geometric representation of certain distributional features, such as multimodality, that are of concern in some statistical applications. This inadequacy motivates us to develop a device for ranking data points according to their “representativeness” rather than their “centrality” with respect to an underlying distribution of interest. Derived essentially from a choice of goodness-of-fit test statistic, our device calls for a new interpretation of “depth”, more akin to the concept of density than to location. It copes particularly well with multivariate data exhibiting multimodality. In addition to providing depth values for individual data points, depth functions derived from goodness-of-fit tests extend naturally to provide depth values for subsets of data points, a concept new to the data-depth literature.
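As a small illustration of ranking by “representativeness” rather than centrality, the sketch below scores the points of a bimodal sample by a kernel density estimate and contrasts the highest-scoring point with the overall mean; this is only a density-style ranking in the spirit described above, not the goodness-of-fit construction of the article, and the simulated data are invented.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)

# A bimodal bivariate sample: two well-separated clusters
A = rng.normal(loc=[-3, 0], scale=0.5, size=(150, 2))
B = rng.normal(loc=[+3, 0], scale=0.5, size=(150, 2))
X = np.vstack([A, B])

# Density-style "depth": score each point by a kernel density estimate
kde = gaussian_kde(X.T)
score = kde(X.T)

deepest = X[np.argmax(score)]          # most "representative" point: inside a mode
print("overall mean (falls between the modes):", np.round(X.mean(axis=0), 2))
print("highest-density point (inside a mode):", np.round(deepest, 2))
```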
998.
Studying the effect of an exposure or intervention on a dichotomous outcome is very common in medical research. Logistic regression (LR) is often used to quantify such an association, and it provides the odds ratio (OR). The OR often overestimates the effect size when the outcome is prevalent. In such situations, use of the relative risk (RR) has been suggested. We propose modifications to the Zhang and Yu and the Diaz-Quijano methods. These methods were compared with the stratified Mantel-Haenszel method, LR, log-binomial regression (LBR), the Zhang and Yu method, Poisson/Cox regression, modified Poisson/Cox regression, the marginal probability method, the COPY method, inverse-probability-of-treatment-weighted LBR, and the Diaz-Quijano method. Our proposed modified Diaz-Quijano (MDQ) method provides an RR estimate and confidence interval similar to those from modified Poisson/Cox regression and LBR. The proposed modification of the Zhang and Yu method gives a better estimate of the RR and its standard error than the original method in a variety of situations with a prevalent outcome. The MDQ method can easily be used to estimate the RR and its confidence interval in studies that require the reporting of RRs. Regression models that directly provide the RR estimate without convergence problems, such as the MDQ method and modified Poisson/Cox regression, should be preferred.
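For readers unfamiliar with the modified Poisson approach mentioned above, the sketch below fits a Poisson GLM to a simulated binary outcome with a robust (sandwich) covariance and exponentiates the coefficient to obtain a relative risk; it is a generic illustration of modified Poisson regression, not of the proposed MDQ modification, and the data are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)

# Simulated cohort with a prevalent binary outcome and true RR = 1.5 for exposure
n = 2000
exposure = rng.integers(0, 2, size=n)
risk = np.where(exposure == 1, 0.45, 0.30)        # common outcome, so the OR exaggerates the RR
y = rng.binomial(1, risk)
X = sm.add_constant(exposure.astype(float))

# Modified Poisson regression: Poisson GLM on the binary outcome plus robust standard errors
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
rr = np.exp(fit.params[1])
ci = np.exp(fit.conf_int()[1])
print(f"estimated RR = {rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```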
999.
In this article, the comparison of several population proportions using a multiple decision approach is studied. The probability that the order of the sample proportions matches the order of the population proportions is controlled. A related multiple comparison procedure with a control is also discussed. For ranking the proportions of a multinomial distribution, simultaneous confidence intervals are constructed and used for the ranking. Some examples are used to illustrate the multiple decision procedures discussed in this paper.
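One common construction of simultaneous confidence intervals for multinomial proportions is Goodman's (1965) chi-square-based interval, sketched below and used to produce a rough ranking of the categories; it is offered as a generic illustration of the ingredient mentioned above, not necessarily the authors' exact procedure, and the counts are invented.

```python
import numpy as np
from scipy.stats import chi2

def goodman_intervals(counts, alpha=0.05):
    """Goodman (1965) simultaneous confidence intervals for multinomial proportions."""
    counts = np.asarray(counts, dtype=float)
    N, k = counts.sum(), len(counts)
    A = chi2.ppf(1 - alpha / k, df=1)               # Bonferroni-adjusted chi-square quantile
    half = np.sqrt(A * (A + 4 * counts * (N - counts) / N))
    lower = (A + 2 * counts - half) / (2 * (N + A))
    upper = (A + 2 * counts + half) / (2 * (N + A))
    return np.column_stack([lower, upper])

counts = np.array([120, 95, 60, 25])                # invented multinomial counts
ci = goodman_intervals(counts)
order = np.argsort(-counts)                          # rank categories by sample proportion
for i in order:
    print(f"category {i}: p_hat = {counts[i] / counts.sum():.3f}, "
          f"CI = ({ci[i, 0]:.3f}, {ci[i, 1]:.3f})")
```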
1000.
We consider a device designed to perform missions consisting of a random sequence of phases or stages with random durations. The mission process is described by a Markov renewal process, and the system is a complex one consisting of a number of components whose lifetimes depend on the phases of the mission. We discuss models and tools for computing system, mission, and phase reliabilities using Markov renewal theory. A simplified model involving a mission-based system with maximal repair is analyzed first, and the results are then extended, using intrinsic aging concepts, to the case where there is no repair. Our objective is to focus on the computation of system reliability for these two extreme cases.
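A Monte Carlo sketch of the setting described above: phases evolve according to a Markov renewal (semi-Markov) process, component failure rates depend on the current phase, and mission reliability is the probability that a series system survives until the mission completes. All transition probabilities, rates, and the series structure below are invented for illustration, and the simulation stands in for the paper's analytic Markov renewal computations.

```python
import numpy as np

rng = np.random.default_rng(7)

# Three mission phases plus an absorbing "mission complete" state (index 3); invented kernel
P = np.array([[0.0, 0.7, 0.3, 0.0],
              [0.0, 0.0, 0.6, 0.4],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 1.0]])
mean_duration = np.array([2.0, 3.0, 1.5, 0.0])        # exponential phase durations (means)

# Two components in series; the failure rate of each component depends on the phase
failure_rate = np.array([[0.02, 0.05, 0.10],           # component 1, phases 0..2
                         [0.01, 0.08, 0.03]])          # component 2, phases 0..2

def one_mission():
    """Simulate one mission; return True if both components survive to completion."""
    phase = 0
    while phase != 3:
        duration = rng.exponential(mean_duration[phase])
        # exponential lifetimes within the phase, with phase-dependent rates
        lifetimes = rng.exponential(1.0 / failure_rate[:, phase])
        if np.any(lifetimes < duration):                # series system: any failure ends the mission
            return False
        phase = rng.choice(4, p=P[phase])
    return True

B = 20000
reliability = np.mean([one_mission() for _ in range(B)])
print("estimated mission reliability:", round(float(reliability), 3))
```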