111.
Abstract

The Mellin integral transform is widely used to find the distributions of products and quotients of independent random variables defined over the positive domain, but it is rarely used to derive distributions of random variables that take both positive and negative values. In this paper, the Mellin integral transform is applied to obtain the doubly noncentral t density and its distribution function in convergent series forms.
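For reference, the classical Mellin-transform identities that such derivations build on (standard results for positive random variables, not the paper's doubly noncentral extension) are
\[
\mathcal{M}_f(s) = \int_0^\infty x^{s-1} f(x)\,dx = \mathbb{E}\bigl[X^{s-1}\bigr],
\]
and, for independent positive random variables X and Y with densities f and g,
\[
\mathcal{M}_{XY}(s) = \mathcal{M}_f(s)\,\mathcal{M}_g(s), \qquad
\mathcal{M}_{X/Y}(s) = \mathcal{M}_f(s)\,\mathcal{M}_g(2 - s),
\]
with the density of the product or quotient recovered through the inverse Mellin transform
\[
f_Z(z) = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} z^{-s}\, \mathcal{M}_Z(s)\, ds .
\]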
112.
Abstract

Asymptotic confidence intervals are given for two functions of multinomial outcome probabilities: Gini's diversity measure and Shannon's entropy. “Adjusted” proportions are used in all asymptotic mean and variance formulas, along with a possible logarithmic transformation. Exact confidence coefficients are computed in some cases. Monte Carlo simulation is used in other cases to compare actual coverages to nominal ones. Some recommendations are made.
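A minimal sketch of the kind of interval described above, using standard delta-method variances for Shannon entropy and Gini (Simpson) diversity; the add-0.5 cell adjustment is only a hypothetical stand-in for the paper's "adjusted" proportions, and no logarithmic transformation is applied:

import numpy as np
from scipy import stats

def entropy_gini_cis(counts, level=0.95, adjust=0.5):
    """Delta-method confidence intervals for Shannon entropy and
    Gini (Simpson) diversity from multinomial counts."""
    counts = np.asarray(counts, dtype=float)
    n, k = counts.sum(), len(counts)
    p = (counts + adjust) / (n + k * adjust)   # hypothetical adjusted proportions

    # Shannon entropy H = -sum p_i log p_i, first-order variance
    H = -np.sum(p * np.log(p))
    var_H = (np.sum(p * np.log(p) ** 2) - np.sum(p * np.log(p)) ** 2) / n

    # Gini (Simpson) diversity D = 1 - sum p_i^2, first-order variance
    D = 1.0 - np.sum(p ** 2)
    var_D = 4.0 * (np.sum(p ** 3) - np.sum(p ** 2) ** 2) / n

    z = stats.norm.ppf(0.5 + level / 2)
    return {"H": (H, H - z * np.sqrt(var_H), H + z * np.sqrt(var_H)),
            "D": (D, D - z * np.sqrt(var_D), D + z * np.sqrt(var_D))}

print(entropy_gini_cis([12, 30, 45, 13]))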
113.
Abstract

It is common to monitor several correlated quality characteristics using Hotelling's T² statistic. However, T² confounds location shifts with scale shifts, so it is often difficult to determine the factors responsible for an out-of-control signal in terms of the process mean vector and/or process covariance matrix. In this paper, we propose a diagnostic procedure called the ‘D-technique’ to detect the nature of the shift. For this purpose, two sets of regression equations, each consisting of the regression of one variable on the remaining variables, are used to characterize the ‘structure’ of the ‘in control’ process and that of the ‘current’ process. To determine the sources responsible for an out-of-control state, it is shown that it suffices to compare these two structures using a dummy-variable multiple regression equation. The proposed method is operationally simpler and computationally advantageous compared with existing diagnostic tools. The technique is illustrated with various examples.
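For orientation, a small sketch of the standard Hotelling T² monitoring statistic referred to above (Phase II form, with a known in-control mean vector and covariance matrix); the paper's D-technique diagnostic itself is not reproduced here:

import numpy as np

def hotelling_t2(x, mean0, cov0):
    """Hotelling's T^2 for a single multivariate observation x against
    an in-control mean vector and covariance matrix."""
    d = np.asarray(x) - np.asarray(mean0)
    return float(d @ np.linalg.solve(np.asarray(cov0), d))

# Example: a 3-variable process with a hypothetical in-control model
mean0 = np.array([10.0, 5.0, 2.0])
cov0 = np.array([[1.0, 0.4, 0.2],
                 [0.4, 1.0, 0.3],
                 [0.2, 0.3, 1.0]])
x_new = np.array([11.2, 4.1, 2.5])
print(hotelling_t2(x_new, mean0, cov0))   # compare against a chi-square_3 control limit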
114.
Abstract

In a recent article, Hsueh et al. (Hsueh, H.-M., Liu, J.-P., Chen, J. J. (2001). Unconditional exact tests for equivalence or noninferiority for paired binary endpoints. Biometrics 57:478–483.) considered unconditional exact tests for paired binary endpoints. They suggested two statistics, one of which is based on the restricted maximum-likelihood estimator. Properties of these statistics and the related tests are treated in this article.
115.
Abstract

The efficacy and the asymptotic relative efficiency (ARE) of a weighted sum of Kendall's taus, a weighted sum of Spearman's rhos, a weighted sum of Pearson's r's, and a weighted sum of z-transformations of the Fisher–Yates correlation coefficients, in the presence of a blocking variable, are discussed. A method for selecting the weighting constants that maximize the efficacy of these four correlation coefficients is proposed. Estimates, test statistics and confidence intervals for the four weighted correlation coefficients are also developed. To compare the small-sample properties of the four tests, a simulation study is performed. Both the theoretical and the simulated results favor the weighted sum of Pearson correlation coefficients with the optimal weights, as well as the weighted sum of z-transformations of the Fisher–Yates correlation coefficients with the optimal weights.
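As an illustration of combining block-wise correlations with weights, here is a sketch using Pearson's r with Fisher's z-transform and the conventional inverse-variance weights n_i − 3; the paper's efficacy-maximizing weights and its Fisher–Yates (normal-scores) variant are not reproduced:

import numpy as np

def combined_fisher_z(blocks):
    """Weighted combination of block-wise Pearson correlations via
    Fisher's z-transform with weights w_i = n_i - 3 (a standard choice;
    the paper's optimal weights may differ)."""
    z_vals, weights = [], []
    for x, y in blocks:
        r = np.corrcoef(x, y)[0, 1]
        z_vals.append(np.arctanh(r))      # Fisher z-transform of r
        weights.append(len(x) - 3)        # Var(z) is roughly 1/(n - 3)
    z_vals = np.array(z_vals)
    weights = np.array(weights, dtype=float)
    z_bar = np.sum(weights * z_vals) / np.sum(weights)
    se = 1.0 / np.sqrt(np.sum(weights))
    return np.tanh(z_bar), z_bar / se     # pooled correlation and a z-test statistic

rng = np.random.default_rng(0)
blocks = []
for n in (20, 30, 25):                    # three blocks of different sizes
    x = rng.normal(size=n)
    blocks.append((x, 0.5 * x + rng.normal(size=n)))
print(combined_fisher_z(blocks))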
116.
The authors show how saddlepoint techniques lead to highly accurate approximations for Bayesian predictive densities and cumulative distribution functions in stochastic model settings where the prior is tractable, but not necessarily the likelihood or the predictand distribution. They consider more specifically models involving predictions associated with waiting times for semi-Markov processes whose distributions are indexed by an unknown parameter θ. Bayesian prediction for such processes when they are not stationary is also addressed; the inverse-Gaussian based saddlepoint approximation of Wood, Booth & Butler (1993) is shown to handle the nonstationarity accurately, whereas the normal-based Lugannani & Rice (1980) approximation cannot. Their methods are illustrated by predicting various waiting times associated with M/M/q and M/G/1 queues. They also discuss modifications to matrix renewal theory needed for computing the moment generating functions used in the saddlepoint methods.
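For reference, the normal-based Lugannani & Rice tail approximation mentioned above takes the standard form: with K the cumulant generating function of the quantity of interest and the saddlepoint ŝ solving K'(ŝ) = x,
\[
\hat{w} = \operatorname{sgn}(\hat{s})\sqrt{2\bigl(\hat{s}x - K(\hat{s})\bigr)}, \qquad
\hat{u} = \hat{s}\sqrt{K''(\hat{s})},
\]
\[
\Pr(X \ge x) \;\approx\; 1 - \Phi(\hat{w}) + \phi(\hat{w})\left(\frac{1}{\hat{u}} - \frac{1}{\hat{w}}\right),
\qquad x \ne \mathbb{E}[X].
\]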
117.
The authors develop consistent nonparametric estimation techniques for the directional mixing density. Classical spherical harmonics are used to adapt Euclidean techniques to this directional environment. Minimax rates of convergence are obtained for rotationally invariant densities satisfying various smoothness conditions. It is found that the differences in smoothness between the Laplace, the Gaussian and the von Mises–Fisher distributions lead to contrasting inferential conclusions.
118.
Ranked set sampling is a sampling approach that leads to improved statistical inference in situations where the units to be sampled can be ranked relative to each other prior to formal measurement. This ranking may be done either by subjective judgment or according to an auxiliary variable, and it need not be completely accurate. In fact, results in the literature have shown that no matter how poor the quality of the ranking, procedures based on ranked set sampling tend to be at least as efficient as procedures based on simple random sampling. However, efforts to quantify the gains in efficiency for ranked set sampling procedures have been hampered by a shortage of available models for imperfect rankings. In this paper, we introduce a new class of models for imperfect rankings, and we provide a rigorous proof that essentially any reasonable model for imperfect rankings is a limit of models in this class. We then describe a specific, easily applied method for selecting an appropriate imperfect rankings model from the class.
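A minimal sketch of the balanced ranked set sampling protocol described above, with imperfect ranking simulated by a noisy auxiliary variable; the noise mechanism is only an illustrative stand-in, not the paper's new class of imperfect-rankings models:

import numpy as np

def ranked_set_sample(population, set_size, cycles, rank_noise=0.5, rng=None):
    """Draw a balanced ranked set sample of size set_size * cycles.

    Ranking is done on a noisy copy of the response (a simple stand-in for
    judgment or auxiliary-variable ranking); in set i, the unit judged to
    have rank i is the one actually measured."""
    rng = np.random.default_rng(rng)
    measured = []
    for _ in range(cycles):
        for i in range(set_size):
            units = rng.choice(population, size=set_size, replace=False)
            ranking_var = units + rng.normal(scale=rank_noise, size=set_size)
            order = np.argsort(ranking_var)          # imperfect judgment ranking
            measured.append(units[order[i]])         # measure the i-th judged unit
    return np.array(measured)

pop = np.random.default_rng(1).normal(loc=10, scale=2, size=10_000)
rss = ranked_set_sample(pop, set_size=4, cycles=25, rng=2)
print(rss.mean(), rss.std(ddof=1) / np.sqrt(len(rss)))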
119.
We introduce a new goodness-of-fit test which can be applied to hypothesis testing about the marginal distribution of dependent data. We derive a new test for the equivalent hypothesis in the space of wavelet coefficients. Such properties of the wavelet transform as orthogonality, localisation and sparsity make hypothesis testing in the wavelet domain easier than in the domain of distribution functions. We propose to test the null hypothesis separately at each wavelet decomposition level, both to overcome the problem of the bi-dimensionality of wavelet indices and to identify the frequency at which the empirical distribution function differs from the null when the null hypothesis is rejected. We suggest a test statistic and state its asymptotic distribution under the null and under some of the alternative hypotheses.
120.
We show how a simple normal approximation to Erlang's delay formula can be used to analyze capacity and staffing problems in service systems that can be modeled as M/M/s queues. The number of servers, s, needed in an M/M/s queueing system to ensure a probability of delay of at most p can be well approximated by s_p = ρ + z_{1−p}√ρ, where z_{1−p} is the (1 − p)th percentile of the standard normal distribution and ρ, the offered load on the system, is the ratio of λ, the customer arrival rate, to μ, the service rate. We examine the accuracy of this approximation over a set of parameters typical of service operations ranging from police patrol, through telemarketing, to automatic teller machines, and we demonstrate that it tends to slightly underestimate the number of servers actually needed to hit the delay probability target: adding one server to the number suggested by the formula typically gives the exact result. More importantly, the structure of the approximation promotes operational insight by explicitly linking the number of servers with server utilization and the customer service level. Using a scenario based on an actual teleservicing operation, we show how operations managers and designers can quickly obtain insights about the trade-offs between system size, system utilization and customer service. We argue that this little-used approach deserves a prominent role in the operations analyst's and operations manager's toolbags.
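A small sketch (standard queueing formulas, not code from the paper) comparing the square-root staffing approximation with the exact Erlang-C delay probability for an M/M/s queue; the load and target values are illustrative only:

import math
from scipy import stats

def erlang_c(s, rho):
    """Exact Erlang-C probability that an arriving customer must wait
    in an M/M/s queue with offered load rho = lambda/mu (requires rho < s)."""
    summation = sum(rho ** k / math.factorial(k) for k in range(s))
    tail = rho ** s / (math.factorial(s) * (1 - rho / s))
    return tail / (summation + tail)

def servers_normal_approx(rho, p):
    """Square-root staffing: s approximately rho + z_{1-p} * sqrt(rho)."""
    return math.ceil(rho + stats.norm.ppf(1 - p) * math.sqrt(rho))

rho, p = 40.0, 0.10              # offered load of 40 Erlangs, 10% delay target
s_approx = servers_normal_approx(rho, p)
for s in (s_approx, s_approx + 1):
    print(s, round(erlang_c(s, rho), 4))   # check whether one extra server is needed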