281.
The skew normal distribution of Azzalini (Scand J Stat 12:171–178, 1985) has been found suitable for unimodal densities that exhibit some skewness. In this article, we introduce a flexible extension of the Azzalini (1985) skew normal distribution based on a symmetric component normal distribution (Gui et al. in J Stat Theory Appl 12(1):55–66, 2013). The proposed model can efficiently capture bimodality, skewness, kurtosis and heavy tails. The paper presents various basic properties of this family of distributions and provides two stochastic representations that are useful for deriving theoretical properties and for simulating from the distribution. Further, maximum likelihood estimation of the parameters is studied numerically by simulation, and the distribution is investigated by comparative fitting of three real datasets.
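As background, Azzalini's classical skew normal SN(α) admits a simple stochastic representation that the abstract alludes to; a minimal sketch (of the base distribution only, not the proposed extension) is:

```python
import numpy as np

rng = np.random.default_rng(0)

def rskewnorm(n, alpha, rng):
    # Azzalini's stochastic representation: with (U0, U1) iid N(0, 1) and
    # delta = alpha / sqrt(1 + alpha^2), the variable
    #   Z = delta * |U0| + sqrt(1 - delta^2) * U1
    # has the SN(alpha) density 2 * phi(z) * Phi(alpha * z).
    delta = alpha / np.sqrt(1.0 + alpha**2)
    u0 = rng.standard_normal(n)
    u1 = rng.standard_normal(n)
    return delta * np.abs(u0) + np.sqrt(1.0 - delta**2) * u1

z = rskewnorm(200_000, alpha=5.0, rng=rng)
delta = 5.0 / np.sqrt(26.0)
print(z.mean())   # should be near the theoretical mean delta * sqrt(2/pi)
```

Such representations are exactly what makes both simulation and derivation of moments straightforward for this family.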
282.
Survey statisticians make use of auxiliary information to improve estimates. One important example is calibration estimation, which constructs new weights that match benchmark constraints on auxiliary variables while remaining “close” to the design weights. Multiple-frame surveys are increasingly used by statistical agencies and private organizations to reduce sampling costs and/or avoid frame undercoverage errors. Several ways of combining estimates derived from such frames have been proposed elsewhere; in this paper, we extend the calibration paradigm, previously used for single-frame surveys, to estimate the total of a variable of interest in a dual-frame survey. Calibration is a general tool that allows auxiliary information from both frames to be included, and it incorporates, as special cases, certain dual-frame estimators proposed previously. The theoretical properties of our class of estimators are derived and discussed, and simulation studies are conducted to compare the efficiency of the procedure under different sets of auxiliary variables. Finally, the proposed methodology is applied to real data from the Barometer of Culture of Andalusia survey.
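To make the calibration idea concrete, here is a minimal single-frame sketch of linear (chi-square distance) calibration, where the adjusted weights reproduce known auxiliary totals exactly; the design weights, auxiliary variables and benchmark totals below are hypothetical, and the dual-frame extension that is the paper's contribution is not shown:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
d = np.full(n, 10.0)                                      # design weights
X = np.column_stack([np.ones(n), rng.uniform(0, 1, n)])   # auxiliary variables
t_x = np.array([2000.0, 1050.0])                          # known population totals

# Linear calibration: w_i = d_i * (1 + x_i' lam), with lam chosen so that
# the calibrated totals sum_i w_i x_i match the benchmarks t_x exactly.
M = (X * d[:, None]).T @ X
lam = np.linalg.solve(M, t_x - d @ X)
w = d * (1.0 + X @ lam)

print(w @ X)   # reproduces t_x
```

The closed form follows from minimizing the chi-square distance sum_i (w_i - d_i)^2 / d_i subject to the benchmark constraints.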
283.
This paper proposes a new factor rotation for the context of functional principal components analysis. The rotation seeks to re-express a functional subspace in terms of directions of decreasing smoothness, as represented by a generalized smoothing metric. It can be implemented simply, and we show in two examples that it can improve the interpretability of the leading components.
284.
Using networks as prior knowledge to guide model selection is a way to achieve structured sparsity. In particular, the fused lasso, originally designed to penalize differences of coefficients corresponding to successive features, has been generalized to handle features whose effects are structured according to a given network. As with any prior information, the network provided in the penalty may contain misleading edges that connect coefficients whose difference is not zero, and the extent to which the performance of the method depends on the suitability of the graph has never been clearly assessed. In this work we investigate the theoretical and empirical properties of the adaptive generalized fused lasso in the context of generalized linear models. In the fixed \(p\) setting, we show that, asymptotically, adding misleading edges to the graph does not prevent the adaptive generalized fused lasso from enjoying asymptotic oracle properties, while omitting suitable edges can be more problematic. These theoretical results are complemented by an extensive simulation study that assesses the robustness of the adaptive generalized fused lasso against misspecification of the network, as well as its applicability when theoretical coefficients are not exactly equal. A further contribution is an evaluation of the generalized fused lasso for the joint modeling of multiple sparse regression functions. Illustrations are provided on two real data examples.
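The notion of a "misleading edge" can be made concrete by evaluating the generalized fused lasso penalty itself; the function below is an illustrative sketch (not the authors' estimator), with adaptive weights left at one for simplicity:

```python
import numpy as np

def gen_fused_lasso_penalty(beta, edges, lam1, lam2, w_sparse=None, w_fuse=None):
    # Adaptive generalized fused lasso penalty:
    #   lam1 * sum_j w_sparse_j |beta_j|
    # + lam2 * sum_{(i,j) in E} w_fuse_ij |beta_i - beta_j|
    # In the adaptive version the weights come from an initial consistent fit.
    beta = np.asarray(beta, float)
    if w_sparse is None:
        w_sparse = np.ones_like(beta)
    if w_fuse is None:
        w_fuse = np.ones(len(edges))
    sparse_term = lam1 * np.sum(w_sparse * np.abs(beta))
    fuse_term = lam2 * sum(w * abs(beta[i] - beta[j])
                           for (i, j), w in zip(edges, w_fuse))
    return sparse_term + fuse_term

beta = np.array([1.0, 1.0, 0.0])
edges = [(0, 1), (1, 2)]   # (1, 2) is "misleading": beta_1 != beta_2
val = gen_fused_lasso_penalty(beta, edges, lam1=1.0, lam2=1.0)
print(val)   # sparse term 2 + fusion term |1 - 0| = 3.0
```

A misleading edge thus adds a nonzero fusion cost |beta_i - beta_j| that the estimator must pay, which is what the paper's asymptotic analysis quantifies.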
285.
This paper introduces a finite mixture of canonical fundamental skew \(t\) (CFUST) distributions for a model-based approach to clustering where the clusters are asymmetric and possibly long-tailed (Lee and McLachlan, arXiv:1401.8182 [stat.ME], 2014b). The family of CFUST distributions includes the restricted and unrestricted multivariate skew \(t\) distributions as special cases. In recent years, several versions of the multivariate skew \(t\) (MST) mixture model have been put forward, together with various EM-type algorithms for parameter estimation; these formulations adopted either a restricted or an unrestricted characterization for their MST densities. In this paper, we examine a natural generalization of these developments, employing the CFUST distribution as the parametric family for the component distributions, and point out that the restricted and unrestricted characterizations can be unified under this general formulation. We show that an exact implementation of the EM algorithm can be achieved for the CFUST distribution and mixtures of this distribution, and present some new analytical results for a conditional expectation involved in the E-step.
286.
Estimation of the time-average variance constant (TAVC) of a stationary process plays a fundamental role in statistical inference for the mean of a stochastic process. Wu (2009) proposed an efficient algorithm that recursively computes the TAVC with \(O(1)\) memory and computational complexity. In this paper, we propose two new recursive TAVC estimators that compute the TAVC estimate with \(O(1)\) computational complexity. One of them is uniformly better than Wu’s estimator in terms of asymptotic mean squared error (MSE), at the cost of slightly higher memory complexity. The other preserves the \(O(1)\) memory complexity and is better than Wu’s estimator in most situations. Moreover, the first estimator is nearly optimal in the sense that its asymptotic MSE is \(2^{10/3}3^{-2} \fallingdotseq 1.12\) times that of the optimal off-line TAVC estimator.
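For readers unfamiliar with the TAVC, the quantity being estimated is sigma^2 = lim n*Var(X̄_n); a simple off-line batch-means sketch (illustrative only — not Wu's recursive algorithm, nor the new estimators, and the AR(1) parameters are hypothetical) is:

```python
import numpy as np

def batch_means_tavc(x, batch_size):
    # Non-overlapping batch-means estimate of the time-average variance
    # constant sigma^2 = lim n * Var(mean(X_1..X_n)): split the series
    # into batches, then scale the variance of the batch means.
    x = np.asarray(x, float)
    m = len(x) // batch_size
    means = x[:m * batch_size].reshape(m, batch_size).mean(axis=1)
    return batch_size * means.var(ddof=1)

# AR(1): X_t = phi * X_{t-1} + e_t has TAVC sigma_e^2 / (1 - phi)^2.
rng = np.random.default_rng(2)
phi, n = 0.5, 200_000
e = rng.standard_normal(n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

tavc = batch_means_tavc(x, batch_size=500)
print(tavc)   # close to 1 / (1 - 0.5)^2 = 4
```

The off-line estimator needs the full series in memory; the point of the recursive estimators discussed in the paper is to achieve comparable accuracy while storing only \(O(1)\) summary quantities.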
287.
Both approximate Bayesian computation (ABC) and composite likelihood methods are useful for Bayesian and frequentist inference, respectively, when the likelihood function is intractable. We propose to use composite likelihood score functions as summary statistics in ABC in order to obtain accurate approximations to the posterior distribution. This is motivated by the use of the score function of the full likelihood, and extended to general unbiased estimating functions in complex models. Moreover, we show that if the composite score is suitably standardised, the resulting ABC procedure is invariant to reparameterisations and automatically adjusts the curvature of the composite likelihood, and of the corresponding posterior distribution. The method is illustrated through examples with simulated data, and an application to modelling of spatial extreme rainfall data is discussed.
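A minimal sketch of the score-as-summary idea, in a toy normal model where the full-likelihood score (the paper's motivating special case) is tractable; the prior, tolerance, pilot estimate and sample sizes below are illustrative choices, and the paper's standardisation step is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)
y_obs = rng.normal(2.0, 1.0, size=100)   # observed data, N(mu, 1), mu unknown

def score(theta, y):
    # Score of the N(theta, 1) likelihood: d/dtheta log L = sum_i (y_i - theta)
    return np.sum(y - theta)

theta_pilot = y_obs.mean()                # fixed point at which the score is evaluated
s_obs = score(theta_pilot, y_obs)         # zero at the pilot MLE by construction

# ABC rejection: draw mu from the N(0, 10^2) prior, simulate data, and keep
# draws whose simulated score is within the tolerance of the observed score.
kept = []
for mu in rng.normal(0.0, 10.0, size=50_000):
    y_sim = rng.normal(mu, 1.0, size=100)
    if abs(score(theta_pilot, y_sim) - s_obs) <= 2.0:
        kept.append(mu)
kept = np.array(kept)
print(len(kept), kept.mean())   # accepted draws concentrate near the posterior mean
```

Because the score is a near-sufficient, low-dimensional summary here, the accepted draws approximate the posterior well; the paper's contribution is to carry this over to composite scores when the full likelihood is intractable.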
288.
In analyzing interval censored data, a non-parametric estimator is often desired due to difficulties in assessing model fit; because of this, the non-parametric maximum likelihood estimator (NPMLE) is often the default. However, estimates of quantities of interest derived from the survival function, such as the quantiles, have very large standard errors due to the jagged form of the estimator. By constraining the estimator to the class of log-concave functions, the survival estimate is guaranteed to be smooth and has much better operating characteristics than the unconstrained NPMLE, without the need to specify a parametric family or a smoothing parameter. In this paper, we first prove that the likelihood can be maximized over a finite set of parameters under mild conditions, even though the log-likelihood function is not strictly concave. We then present an efficient algorithm for computing a local maximum of the likelihood function. Using this fast new algorithm, we present evidence from simulated current status data suggesting that the rate of convergence of the log-concave estimator is faster (between \(n^{2/5}\) and \(n^{1/2}\)) than that of the unconstrained NPMLE (between \(n^{1/3}\) and \(n^{1/2}\)).
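The unconstrained NPMLE that serves as the baseline here is classically computed by Turnbull's self-consistency EM; a small sketch on hypothetical intervals (the log-concave constraint and the authors' algorithm are not implemented) is:

```python
import numpy as np

def turnbull_npmle(intervals, n_iter=500):
    # Turnbull's self-consistency EM for interval-censored data: place mass
    # p_j on candidate support points s_j (here, all unique interval
    # endpoints) to maximize sum_i log sum_{j: s_j in [L_i, R_i]} p_j.
    L, R = np.array(intervals, float).T
    s = np.unique(np.concatenate([L, R]))
    A = (s >= L[:, None]) & (s <= R[:, None])   # A[i, j] = 1 if s_j in [L_i, R_i]
    p = np.full(len(s), 1.0 / len(s))
    for _ in range(n_iter):
        num = A * p                              # E-step: allocate each obs
        p = (num / num.sum(axis=1, keepdims=True)).mean(axis=0)  # M-step
    return s, p

ivals = [(0, 2), (1, 3), (2, 4), (1, 2), (3, 5)]   # (L_i, R_i) censoring intervals
s, p = turnbull_npmle(ivals)
print(dict(zip(s, np.round(p, 3))))
```

The resulting mass function is typically spiky, which is exactly the jaggedness the log-concave constraint is designed to remove.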
289.
290.
Accelerated failure time (AFT) models have proved useful in many contexts, though heavy censoring (as, for example, in cancer survival) and high dimensionality (as, for example, in microarray data) cause difficulties for model fitting and model selection. We propose new approaches to variable selection for censored data, based on AFT models optimized using regularized weighted least squares. The regularization uses a mixture of \(\ell _1\) and \(\ell _2\) norm penalties under two proposed elastic net type approaches: the adaptive elastic net and the weighted elastic net. These extend the original approaches proposed by Ghosh (Adaptive elastic net: an improvement of elastic net to achieve oracle properties, Technical Report, 2007) and Hong and Zhang (Math Model Nat Phenom 5(3):115–133, 2010), respectively. We further extend both approaches by adding censored observations as constraints in their optimization frameworks. The approaches are evaluated on microarray data and by simulation, and their performance is compared with six other variable selection techniques: three generally used for censored data and three correlation-based greedy methods used for high-dimensional data.
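The core computational step — elastic net regularized weighted least squares — can be sketched with a small proximal-gradient (ISTA) solver; this is an illustrative simplification (uncensored data, unit weights, hypothetical coefficients), not the authors' adaptive or censoring-constrained schemes, in which y would be log event times and w_i censoring-adjusted weights:

```python
import numpy as np

def soft(z, t):
    # Soft-thresholding operator, the proximal map of the l1 norm.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weighted_elastic_net(X, y, w, lam, alpha, n_iter=2000):
    # ISTA for (1/2n) sum_i w_i (y_i - x_i'b)^2
    #          + lam * (alpha * ||b||_1 + (1 - alpha)/2 * ||b||_2^2).
    n, p = X.shape
    Xw = X * w[:, None]
    L = np.linalg.eigvalsh(Xw.T @ X / n).max()   # Lipschitz constant of the smooth part
    t = 1.0 / L
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = Xw.T @ (X @ b - y) / n
        # Elastic net proximal step: soft-threshold, then ridge shrinkage.
        b = soft(b - t * grad, t * lam * alpha) / (1.0 + t * lam * (1.0 - alpha))
    return b

rng = np.random.default_rng(4)
n, p = 300, 10
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.5, 0, 0, 0, 0, 0, 0, 0, 0])
y = X @ beta_true + 0.1 * rng.standard_normal(n)   # stand-in for log survival times
w = np.ones(n)                                      # uncensored: equal weights
b = weighted_elastic_net(X, y, w, lam=0.05, alpha=0.9)
print(np.round(b, 2))   # the two true signals survive; the rest shrink toward zero
```

The \(\ell_1\) part performs the variable selection while the \(\ell_2\) part stabilizes correlated features, which is what makes the elastic net attractive for microarray-scale problems.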