A total of 2739 results were found (search time: 218 ms).
971.
Based on progressive Type-I hybrid censored data, statistical analysis of the constant-stress accelerated life test (CS-ALT) for the generalized exponential (GE) distribution is discussed. The maximum likelihood estimates (MLEs) of the parameters and the reliability function are obtained with the EM algorithm, together with the observed Fisher information matrix, the asymptotic variance-covariance matrix of the MLEs, and the asymptotic unbiased estimate (AUE) of the scale parameter. Confidence intervals (CIs) for the parameters are derived using the asymptotic normality of the MLEs and the percentile bootstrap (Boot-p) method. Finally, the point estimates and interval estimates of the parameters are compared via Monte Carlo simulation.
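A minimal sketch of the percentile bootstrap (Boot-p) step for a complete (uncensored, non-accelerated) GE sample is shown below; the progressive Type-I hybrid censoring and the CS-ALT structure of the article are omitted, and the sample size, parameter values, and bootstrap settings are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def ge_negloglik(logpar, x):
    # GE density: f(x) = alpha*lam*exp(-lam*x)*(1 - exp(-lam*x))**(alpha - 1)
    a, lam = np.exp(logpar)                 # positivity via log-parameterization
    z = -np.expm1(-lam * x)                 # 1 - exp(-lam*x)
    return -np.sum(np.log(a) + np.log(lam) - lam * x + (a - 1) * np.log(z))

def ge_mle(x):
    res = minimize(ge_negloglik, x0=np.zeros(2), args=(x,), method="Nelder-Mead")
    return np.exp(res.x)                    # (alpha_hat, lambda_hat)

# simulate a complete GE(alpha=2, lambda=1) sample via the inverse CDF
u = rng.uniform(size=200)
x = -np.log(1 - u ** (1 / 2.0))

alpha_hat, lam_hat = ge_mle(x)
boot = np.array([ge_mle(rng.choice(x, size=x.size, replace=True)) for _ in range(500)])
ci_alpha = np.percentile(boot[:, 0], [2.5, 97.5])    # 95% Boot-p interval for alpha
ci_lambda = np.percentile(boot[:, 1], [2.5, 97.5])   # 95% Boot-p interval for lambda
```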
972.
Many engineering systems have multiple components with more than one degradation measure, and these measures are mutually dependent because of complex failure mechanisms, which creates serious difficulties for reliability work in engineering. To overcome these difficulties, system reliability prediction approaches based on performance degradation theory have developed rapidly in recent years and have shown their superiority over traditional approaches in many applications. This paper proposes reliability models for systems with two dependent degrading components. It is assumed that the degradation paths of the components are governed by gamma processes. For a parallel system, the failure probability function can be approximated by the bivariate Birnbaum–Saunders distribution. From the relationship between parallel and series systems, the failure probability function of a series system can be expressed in terms of the bivariate Birnbaum–Saunders distribution and its marginal distributions. The resulting model is complicated and analytically intractable, and is cumbersome from a computational viewpoint. For this reason, a Bayesian Markov chain Monte Carlo method is developed that allows the maximum likelihood estimates of the parameters to be determined efficiently. Confidence intervals for the failure probability of the systems are then given. To illustrate the proposed model, a numerical example about railway track is presented.
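As a rough illustration of the gamma-process degradation setup, the Monte Carlo sketch below simulates two gamma-process degradation levels at a mission time and estimates series- and parallel-system failure probabilities. The two processes are treated as independent here, whereas the article models their dependence (and uses the bivariate Birnbaum–Saunders approximation); all shape/scale values and thresholds are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

T = 10.0                       # mission time
n = 200_000                    # Monte Carlo replications

# A stationary gamma process has D(T) ~ Gamma(shape=a*T, scale=b), so the level at
# time T can be drawn directly; paths are monotone, so "failed by T" <=> D(T) > threshold.
D1 = rng.gamma(shape=1.2 * T, scale=0.5, size=n)   # component 1 degradation at T
D2 = rng.gamma(shape=0.9 * T, scale=0.7, size=n)   # component 2 degradation at T

fail1, fail2 = D1 > 8.0, D2 > 6.0                  # component failure thresholds
p_series = np.mean(fail1 | fail2)      # series system fails if any component fails
p_parallel = np.mean(fail1 & fail2)    # parallel system fails only if both fail
print(p_series, p_parallel)
```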
973.
In this article, we introduce the Gompertz power series (GPS) class of distributions, obtained by compounding the Gompertz and power series distributions. This class contains several lifetime models, such as the Gompertz-geometric (GG), Gompertz-Poisson (GP), Gompertz-binomial (GB), and Gompertz-logarithmic (GL) distributions, as special cases. Sub-models of the GPS distribution are studied in detail. The hazard rate function of the GPS distribution can be increasing, decreasing, or bathtub-shaped. We obtain several properties of the GPS distribution, such as its probability density function, failure rate function, Shannon entropy, mean residual life function, quantiles, and moments. The maximum likelihood estimation procedure via an EM algorithm is presented, and simulation studies are performed to evaluate this estimation for complete data and the MLEs of the parameters for censored data. Finally, a real example is given.
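For a feel of the hazard shapes, the sketch below evaluates one common parameterization of the Gompertz-geometric (GG) sub-model, in which the compounded survival function reduces to S(x) = (1 - theta)*S_G(x) / (1 - theta*S_G(x)) with Gompertz survival S_G(x) = exp(-(beta/gamma)*(exp(gamma*x) - 1)); the parameterization and the parameter values are assumptions for illustration, and the hazard is differentiated numerically.

```python
import numpy as np

def gompertz_surv(x, beta, gamma):
    # Gompertz survival: S_G(x) = exp(-(beta/gamma)*(exp(gamma*x) - 1))
    return np.exp(-(beta / gamma) * np.expm1(gamma * x))

def gg_surv(x, beta, gamma, theta):
    # Gompertz-geometric survival under the assumed compounding
    s = gompertz_surv(x, beta, gamma)
    return (1 - theta) * s / (1 - theta * s)

x = np.linspace(0.01, 5.0, 500)
for theta in (0.2, 0.8):
    s = gg_surv(x, beta=0.1, gamma=0.5, theta=theta)
    hazard = -np.gradient(np.log(s), x)      # h(x) = -d log S(x) / dx
    print(theta, hazard[0], hazard[-1])      # inspect behaviour near 0 and in the tail
```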
974.
This paper addresses the problem of identifying groups that satisfy specific conditions on the means of feature variables. In this study, we refer to the identified groups as "target clusters" (TCs). To identify TCs, we propose a method based on the normal mixture model (NMM) restricted by a linear combination of means. We provide an expectation–maximization (EM) algorithm to fit the restricted NMM by the maximum likelihood method. The convergence property of the EM algorithm and a reasonable set of initial estimates are presented. We demonstrate the method's usefulness and validity through a simulation study and two well-known data sets. The proposed method provides several types of useful clusters that would be difficult to obtain with conventional clustering or exploratory data analysis methods based on the ordinary NMM. A simple comparison with another target clustering approach shows that the proposed method is promising for this identification task.
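A minimal sketch of an EM iteration for a normal mixture whose component means obey a linear restriction a'mu = b is given below, for the simplified case of a univariate two-component mixture with a common variance (the constrained M-step for the means then follows from a Lagrange-multiplier projection). The restriction, data, and starting values are illustrative assumptions, not those of the article.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(3, 1, 100)])

a, b = np.array([1.0, 1.0]), 1.0                 # illustrative restriction: mu1 + mu2 = 1
pi, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), 1.0

for _ in range(200):
    # E-step: posterior component probabilities (responsibilities)
    dens = pi * norm.pdf(x[:, None], mu, sigma)
    r = dens / dens.sum(axis=1, keepdims=True)
    n_k = r.sum(axis=0)

    # M-step: unconstrained means, then Lagrange projection onto a'mu = b
    mu_hat = (r * x[:, None]).sum(axis=0) / n_k
    shift = (a @ mu_hat - b) / np.sum(a**2 / n_k)
    mu = mu_hat - shift * a / n_k                # the common variance cancels in this step

    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum() / x.size)
    pi = n_k / x.size

print(pi, mu, sigma, a @ mu)                     # a @ mu stays equal to b
```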
975.
Exponential and Weibull models are commonly used lifetime models, with the former being a special case of the latter. In their most general forms, the exponential model involves threshold and scale parameters, whereas the Weibull model involves threshold, scale, and shape parameters. The article analyzes the two models in a Bayesian framework and examines generality versus particularity, in the sense that it tests for the possibility of (not) having a threshold and/or a shape parameter in data arising from the exponential (Weibull) model. The results are illustrated on both complete and censored datasets from the two models.
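The sketch below is a frequentist stand-in for the question the article answers in a Bayesian framework: whether the data need a shape parameter at all. It fits the exponential model (shape fixed at 1) and a two-parameter Weibull model (threshold fixed at 0 in both) and compares them with a likelihood-ratio test; the simulated data and the use of an LRT rather than the article's Bayesian machinery are assumptions.

```python
import numpy as np
from scipy import stats

x = stats.weibull_min.rvs(c=1.5, scale=2.0, size=300, random_state=0)

# exponential (Weibull with shape = 1): the MLE of the scale is the sample mean
loglik_exp = stats.expon.logpdf(x, scale=x.mean()).sum()

# two-parameter Weibull with the threshold fixed at zero
c_hat, _, scale_hat = stats.weibull_min.fit(x, floc=0)
loglik_wei = stats.weibull_min.logpdf(x, c_hat, loc=0, scale=scale_hat).sum()

lrt = 2 * (loglik_wei - loglik_exp)
p_value = stats.chi2.sf(lrt, df=1)     # small p-value: the shape parameter is needed
print(lrt, p_value)
```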
976.
In this paper, we propose a model based on a class of symmetric distributions, which avoids transforming the data, stabilizes the variance of the observations, and provides robust parameter estimation together with high flexibility for modeling different types of data. Probabilistic and statistical aspects of the new model are developed throughout the article, including mathematical properties, parameter estimation, and inference. The results are illustrated with real genomic data.
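The abstract does not spell out the symmetric class used, so the sketch below uses the Student-t family, a standard symmetric heavy-tailed model, to illustrate the kind of robustness to outliers that such classes provide relative to a Gaussian fit; the contaminated sample is simulated for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 1, 190), rng.normal(0, 10, 10)])  # 5% gross contamination

mu_norm, sd_norm = x.mean(), x.std(ddof=1)     # Gaussian estimates, inflated by the outliers
df_t, mu_t, scale_t = stats.t.fit(x)           # symmetric heavy-tailed fit downweights them
print(mu_norm, sd_norm)
print(df_t, mu_t, scale_t)
```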
977.
Modelling of HIV dynamics in AIDS research has greatly improved our understanding of the pathogenesis of HIV-1 infection and has guided the treatment of AIDS patients and the evaluation of antiretroviral therapies. Some of the model parameters have practical meanings and prior knowledge available, but others do not, and incorporating priors can improve statistical inference. Although extensive Bayesian and frequentist estimation methods exist for viral dynamic models, little work has been done on making simultaneous inference about the Bayesian and frequentist parameters. In this article, we propose a hybrid Bayesian inference approach for viral dynamic nonlinear mixed-effects models using the Bayesian-frequentist hybrid theory developed in Yuan [Bayesian frequentist hybrid inference, Ann. Statist. 37 (2009), pp. 2458–2501]. Compared with frequentist inference in a real example and two simulation examples, the hybrid Bayesian approach improves inference accuracy without increasing the computational load.
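As a toy illustration of how prior knowledge on some parameters can sharpen the fit, the sketch below estimates a single-exponential log10 viral-load decay by penalized least squares, with a Gaussian prior on the decay rate acting as the penalty (a MAP estimate). This is only a caricature of the idea; the article's hybrid Bayesian-frequentist inference for nonlinear mixed-effects viral dynamic models is far more elaborate, and all values here are assumed.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 15)
y = 6.0 - 0.35 * t + rng.normal(0, 0.3, t.size)   # noisy log10 viral load: V0=6, delta=0.35

def neg_log_post(par, prior_mean=0.4, prior_sd=0.1, sigma=0.3):
    v0, delta = par
    resid = y - (v0 - delta * t)
    # Gaussian likelihood plus a Gaussian prior on the decay rate delta (MAP objective)
    return 0.5 * np.sum(resid**2) / sigma**2 + 0.5 * ((delta - prior_mean) / prior_sd) ** 2

map_est = minimize(neg_log_post, x0=[5.0, 0.5]).x                               # uses the prior
ls_est = minimize(lambda p: np.sum((y - (p[0] - p[1] * t)) ** 2), x0=[5.0, 0.5]).x  # ignores it
print(map_est, ls_est)
```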
978.
The Monte Carlo method provides estimators of the expectation E_f[h(X)] based on samples drawn either from the true density f or from some instrumental density. In this paper, we show that the Riemann estimators introduced by Philippe (1997) can be improved by using importance sampling. This approach produces a class of Monte Carlo estimators whose variance is of order O(n^-2). The choice of an optimal estimator within this class is discussed. Simulations illustrate the improvement brought by this method. Moreover, we give a criterion to assess the convergence of the optimal estimator to the integral of interest.
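A minimal sketch of the Riemann-sum estimator (without the importance-sampling refinement that the article studies) is shown below for E_f[h(X)] with h(x) = x^2 and f the standard normal density, compared against the crude Monte Carlo average; the target and sample size are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n = 2000
h = lambda x: x**2
f = stats.norm(0, 1)                     # target density f; here E_f[h(X)] = 1

x = np.sort(f.rvs(size=n, random_state=rng))
crude = np.mean(h(x))                                        # classical Monte Carlo estimator
riemann = np.sum(np.diff(x) * h(x[:-1]) * f.pdf(x[:-1]))     # Riemann-sum estimator
print(crude, riemann)
```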
979.
Given the very large amount of data collected every day through population surveys, much new research could reuse this information instead of collecting new samples. Unfortunately, the relevant data are often scattered across different files obtained under different sampling designs. Data fusion is a set of methods for combining information from different sources into a single dataset. In this article, we are interested in a specific problem: the fusion of two data files, one of which is quite small. We propose a model-based procedure combining a logistic regression with an Expectation-Maximization algorithm. Results show that, despite the lack of data, this procedure can perform better than standard matching procedures.
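A bare-bones sketch of the model-based fusion idea appears below: a logistic regression is fitted on the small file, which contains both the shared covariates and the target binary variable, and the variable is then stochastically imputed in the large file that lacks it. The EM refinement of the article is omitted, scikit-learn is used for convenience, and all data are simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# small "donor" file: shared covariates X and the target binary variable Y
X_small = rng.normal(size=(300, 3))
p = 1.0 / (1.0 + np.exp(-(0.8 * X_small[:, 0] - 0.5 * X_small[:, 2])))
y_small = rng.binomial(1, p)

# large "recipient" file: shared covariates only
X_large = rng.normal(size=(10_000, 3))

model = LogisticRegression().fit(X_small, y_small)
y_imputed = rng.binomial(1, model.predict_proba(X_large)[:, 1])   # stochastic imputation
print(y_imputed.mean())
```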
980.
Let X = (X1,…, Xk)' be a k-variate (k ≥ 2) normal random vector with unknown population mean vector μ = (μ1,…, μk)' and covariance matrix Σ of order k, and let μ[1] ≤ … ≤ μ[k] be the ordered values of the μ's. No prior knowledge of the pairing of the μ[i] with the Xj (or of the μ[i] with the σj²) is assumed for any i and j (1 ≤ i, j ≤ k). Based on a random sample of N independent vector observations on X, this paper considers upper and lower (one-sided) as well as two-sided 100γ% (0 < γ < 1) confidence intervals for μ[k] and μ[1], the largest and smallest means, respectively, both when Σ is known and when Σ = σ²R with common unknown variance σ² > 0 and known correlation matrix R. An optimal two-sided confidence interval, obtained as the shortest-length interval in this class, is also considered. The tables and a computer program needed to apply these procedures are provided.
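To see why tailored intervals for the largest mean are needed, the Monte Carlo sketch below checks the coverage of the naive interval max_i Xbar_i ± z_{(1+γ)/2} σ/√N for μ[k] when Σ = σ²I and all means coincide; in this configuration the empirical coverage falls below the nominal level γ. The dimension, sample size, and nominal level are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
k, N, sigma, gamma = 5, 20, 1.0, 0.95
mu = np.zeros(k)                               # all population means equal; mu_[k] = 0
half = norm.ppf((1 + gamma) / 2) * sigma / np.sqrt(N)

reps, cover = 20_000, 0
for _ in range(reps):
    xbar = rng.normal(mu, sigma / np.sqrt(N))  # sample means of N observations per coordinate
    top = xbar.max()                           # natural estimate of the largest mean
    cover += (top - half <= mu.max() <= top + half)
print(cover / reps)                            # runs below the nominal 0.95 here
```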