A total of 1,103 query results were found (search time: 0 ms).
71.
72.
Based on the recursive formulas of Lee (1988) and Singh and Relyea (1992) for computing the noncentral F distribution, a numerical algorithm for evaluating the distributional values of the sample squared multiple correlation coefficient is proposed. The distribution function of this statistic is usually represented as an infinite weighted sum of an iterative form of the incomplete beta integral, so an effective algorithm for the incomplete beta integral is crucial to the numerical evaluation of the various distributional values. Let a and b denote the two shape parameters appearing in the incomplete beta integral and hence in the sampling distribution function, let n be the sample size, and let p be the number of random variates. Then both 2a = p - 1 and 2b = n - p are positive integers in sampling situations, so the numerical procedures proposed in this paper are greatly simplified by recursively formulating the incomplete beta integral. In this way, the distributional values of the probability density function (pdf) and the cumulative distribution function (cdf) can be computed jointly, from which the quantile can be obtained more efficiently by Newton's method. In addition, computer codes in C are developed for demonstration and performance evaluation. When less precision is required, the implemented method can achieve the exact value to the finite number of significant digits desired. In general, the numerical results are clearly better than those obtained by the various approximations and interpolations of Gurland and Asiribo (1991), Gurland and Milton (1970), and Lee (1971, 1972). In particular, when b = (1/2)(n - p) is an integer, the finite series formulation of Gurland (1968) is used to evaluate the pdf/cdf values without truncation error, and these serve as the pivotal values. With the implemented codes set to double precision, the infinite series form of the derived method achieves the pivotal values for almost all cases under study. Related comparisons and illustrations are also presented.
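To illustrate the Newton-inversion step described above, the central case (population multiple correlation zero) admits a simple form: the sample R² then follows a Beta((p − 1)/2, (n − p)/2) distribution. The minimal Python sketch below evaluates the pdf and cdf jointly and inverts the cdf with a safeguarded Newton iteration; it relies on SciPy's regularized incomplete beta rather than the authors' recursive C routines, and the function names are illustrative only.

```python
import numpy as np
from scipy.special import betainc, betaln

def r2_pdf_cdf(x, n, p):
    """Joint pdf/cdf of R^2 in the central case (population rho^2 = 0),
    where R^2 ~ Beta(a, b) with a = (p - 1)/2 and b = (n - p)/2."""
    a, b = (p - 1) / 2.0, (n - p) / 2.0
    log_pdf = (a - 1) * np.log(x) + (b - 1) * np.log1p(-x) - betaln(a, b)
    return np.exp(log_pdf), betainc(a, b, x)

def r2_quantile(prob, n, p, tol=1e-12, max_iter=100):
    """Invert the cdf by Newton's method, falling back to bisection
    whenever a Newton step leaves the current bracket."""
    a, b = (p - 1) / 2.0, (n - p) / 2.0
    lo, hi = 0.0, 1.0
    x = a / (a + b)                      # start at the Beta(a, b) mean
    for _ in range(max_iter):
        pdf, cdf = r2_pdf_cdf(x, n, p)
        if cdf > prob:
            hi = x
        else:
            lo = x
        x_new = x - (cdf - prob) / pdf   # Newton step using the joint pdf/cdf
        if not (lo < x_new < hi):
            x_new = 0.5 * (lo + hi)      # bisection safeguard
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

if __name__ == "__main__":
    n, p = 30, 4
    q = r2_quantile(0.95, n, p)
    print(q, r2_pdf_cdf(q, n, p)[1])     # the cdf at q should be ~0.95
```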
73.
This paper presents the results of a comprehensive empirical analysis of screening measures for multiple recursive generators (MRGs) of orders one and two. Two kinds of screening measures are distinguished: the spectral test and the lattice test. With respect to these screening measures, two exhaustive searches for the twenty best MRGs of orders one and two are conducted. Empirical comparisons reveal that the screening procedure based on the maximum spectral value criterion is preferable in terms of efficiency and is thus a good way of obtaining ideal MRGs of higher orders. Several extensively tested second-order MRGs are also presented and recommended.
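For reference, a second-order MRG is defined by the recurrence x_i = (a1·x_{i−1} + a2·x_{i−2}) mod m with output u_i = x_i / m. The sketch below only illustrates that recurrence; the multipliers are placeholders and are not the generators recommended in the paper.

```python
def mrg2(n, a1, a2, m, seed=(12345, 67890)):
    """Second-order multiple recursive generator:
    x_i = (a1 * x_{i-1} + a2 * x_{i-2}) mod m, returning u_i = x_i / m."""
    x_prev2, x_prev1 = seed
    out = []
    for _ in range(n):
        x = (a1 * x_prev1 + a2 * x_prev2) % m
        out.append(x / m)                    # uniform variate in [0, 1)
        x_prev2, x_prev1 = x_prev1, x
    return out

if __name__ == "__main__":
    m = 2**31 - 1                            # a commonly used prime modulus
    # The multipliers below are illustrative placeholders only.
    print(mrg2(5, a1=1071064, a2=2113664, m=m))
```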
74.
This article deals with the construction of an X̄ control chart from the Bayesian perspective. We obtain new control limits for the X̄ chart for exponentially distributed data-generating processes through the sequential use of Bayes' theorem and credible intervals. Construction of the control chart is illustrated using a simulated data example. The performance of the proposed, standard, tolerance interval, exponential cumulative sum (CUSUM), and exponential exponentially weighted moving average (EWMA) control limits is examined and compared via a Monte Carlo simulation study. The proposed Bayesian control limits are found to perform better than the standard, tolerance interval, exponential EWMA, and exponential CUSUM control limits for exponentially distributed processes.
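The general Bayesian idea behind such limits can be sketched as follows: place a conjugate gamma prior on the exponential rate, update it with Phase I data, and take a credible interval for the posterior predictive subgroup mean as the control limits. This is only an illustrative sketch under those assumptions, not the exact sequential construction of the paper; the prior parameters and function names are invented for the example.

```python
import numpy as np
from scipy import stats

def bayes_xbar_limits(phase1, subgroup_size, alpha0=1.0, beta0=1.0,
                      cred=0.9973, n_draws=100_000, rng=None):
    """Credible-interval control limits for the subgroup mean of exponential
    data, using a Gamma(alpha0, beta0) prior on the exponential rate."""
    rng = np.random.default_rng(rng)
    x = np.asarray(phase1, dtype=float)
    # Posterior of the rate: Gamma(alpha0 + N, rate = beta0 + sum(x)).
    post = stats.gamma(a=alpha0 + x.size, scale=1.0 / (beta0 + x.sum()))
    lam = post.rvs(size=n_draws, random_state=rng)
    # Posterior predictive draws of a future subgroup mean.
    xbar = rng.exponential(1.0 / lam[:, None],
                           size=(n_draws, subgroup_size)).mean(axis=1)
    lo, hi = np.quantile(xbar, [(1 - cred) / 2, 1 - (1 - cred) / 2])
    return lo, hi

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    phase1 = rng.exponential(scale=2.0, size=100)   # simulated in-control data
    print(bayes_xbar_limits(phase1, subgroup_size=5, rng=rng))
```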
75.
Summary. The use of a fixed rejection region for multiple hypothesis testing has been shown to outperform standard fixed error rate approaches when applied to control of the false discovery rate. In this work it is demonstrated that, if the original step-up procedure of Benjamini and Hochberg is modified to exercise adaptive control of the false discovery rate, its performance is virtually identical to that of the fixed rejection region approach. In addition, the dependence of both methods on the proportion of true null hypotheses is explored, with a focus on the difficulties that are involved in the estimation of this quantity.
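One common way to make the Benjamini–Hochberg step-up procedure adaptive is to plug in an estimate of the proportion of true null hypotheses; the sketch below uses Storey's estimator at lambda = 0.5 as one generic choice and is not necessarily the exact modification studied in the paper.

```python
import numpy as np

def adaptive_bh(pvals, q=0.05, lam=0.5):
    """Adaptive BH: estimate pi0 (Storey-type, tuning parameter lam),
    then run the step-up procedure at effective level q / pi0_hat."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    pi0 = min(1.0, (np.sum(p > lam) + 1) / (m * (1 - lam)))
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / (m * pi0)
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest i with p_(i) <= i*q/(m*pi0)
        rejected[order[:k + 1]] = True
    return rejected, pi0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pvals = np.concatenate([rng.uniform(size=900), rng.beta(0.5, 20, size=100)])
    flags, pi0_hat = adaptive_bh(pvals, q=0.05)
    print(flags.sum(), pi0_hat)
```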
76.
Summary. Given a large number of test statistics, a small proportion of which represent departures from the relevant null hypothesis, a simple rule is given for choosing those statistics that are indicative of departure. It is based on fitting a mixture model to the set of test statistics by the method of moments and then deriving an estimated likelihood ratio. Simulation suggests that the procedure has good properties when the departure from an overall null hypothesis is not too small.
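A simplified version of this moments-then-likelihood-ratio idea can be written down for z-statistics modelled as pi0·N(0,1) + (1 − pi0)·N(mu,1); the unit alternative variance, the moment equations, and the threshold value are simplifying assumptions for illustration, not the exact mixture used in the paper.

```python
import numpy as np
from scipy.stats import norm

def flag_departures(z, lr_threshold=10.0):
    """Fit pi0*N(0,1) + (1-pi0)*N(mu,1) by the first two moments, then flag
    statistics whose estimated likelihood ratio exceeds lr_threshold."""
    z = np.asarray(z, dtype=float)
    m1, m2 = z.mean(), np.mean(z**2)
    mu = (m2 - 1.0) / m1                  # from m1 = (1-pi0)*mu, m2 = 1 + (1-pi0)*mu^2
    pi0 = np.clip(1.0 - m1**2 / (m2 - 1.0), 1e-6, 1 - 1e-6)
    lr = (1 - pi0) * norm.pdf(z, loc=mu) / (pi0 * norm.pdf(z))
    return lr > lr_threshold, mu, pi0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z = np.concatenate([rng.normal(0, 1, 950), rng.normal(3, 1, 50)])
    flags, mu_hat, pi0_hat = flag_departures(z)
    print(mu_hat, pi0_hat, flags.sum())
```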
77.
In some exceptional circumstances, as in very rare diseases, nonrandomized one‐arm trials are the sole source of evidence to demonstrate efficacy and safety of a new treatment. The design of such studies needs a sound methodological approach in order to provide reliable information, and the determination of the appropriate sample size still represents a critical step of this planning process. As, to our knowledge, no method exists for sample size calculation in one‐arm trials with a recurrent event endpoint, we propose here a closed sample size formula. It is derived assuming a mixed Poisson process, and it is based on the asymptotic distribution of the one‐sample robust nonparametric test recently developed for the analysis of recurrent events data. The validity of this formula in handling heterogeneity of event rates, both in time and between patients, and a time‐varying treatment effect was demonstrated with exhaustive simulation studies. Moreover, although the method requires the specification of a process for event generation, it appears robust to misspecification of this process, provided that the number of events at the end of the study is similar to the one assumed in the planning phase. The motivating clinical context is a nonrandomized one‐arm study of gene therapy in a very rare immunodeficiency in children (ADA‐SCID), where a major endpoint is the recurrence of severe infections. Copyright © 2012 John Wiley & Sons, Ltd.
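Since the closed-form formula itself is not reproduced here, the following sketch sizes a one-arm recurrent-event study by brute-force simulation under a gamma-mixed (negative binomial) Poisson process; the rates, follow-up, overdispersion parameter, and the simple Wald-type one-sample test are illustrative assumptions, not the authors' method.

```python
import numpy as np
from scipy.stats import norm

def simulated_power(n, rate_alt, rate_null, followup=1.0, theta=0.5,
                    alpha=0.05, n_sim=2000, rng=None):
    """Empirical power of a one-sided test that the event rate is below
    rate_null, with per-patient counts from a gamma-mixed Poisson process."""
    rng = np.random.default_rng(rng)
    rejections = 0
    for _ in range(n_sim):
        frailty = rng.gamma(shape=1.0 / theta, scale=theta, size=n)  # mean-one frailty
        counts = rng.poisson(rate_alt * frailty * followup)
        rate_hat = counts.sum() / (n * followup)
        se = np.sqrt(counts.var(ddof=1) / n) / followup
        z = (rate_hat - rate_null) / se
        rejections += z < norm.ppf(alpha)    # one-sided test for a rate reduction
    return rejections / n_sim

if __name__ == "__main__":
    # Increase n until the simulated power reaches the target (e.g., 80%).
    for n in (20, 40, 80):
        print(n, simulated_power(n, rate_alt=1.0, rate_null=2.0, rng=1))
```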
78.
There is no established procedure for testing for trend with nominal outcomes that provides both a global hypothesis test and outcome-specific inference. We derive a simple formula for such a test using a weighted sum of Cochran–Armitage test statistics evaluating the trend in each outcome separately. The test is shown to be equivalent to the score test for multinomial logistic regression; however, the new formulation enables the derivation of a sample size formula and multiplicity-adjusted inference for individual outcomes. The proposed methods are implemented in the R package multiCA.
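As a building block for the weighted-sum construction described above, the standard Cochran–Armitage trend statistic for a single binary outcome across ordered groups can be computed as follows. The full multinomial test is provided by the multiCA package; this Python sketch only illustrates the per-outcome component, and the example counts are hypothetical.

```python
import numpy as np

def cochran_armitage_z(events, totals, scores=None):
    """Cochran-Armitage trend statistic for one binary outcome:
    events[k] successes out of totals[k] subjects at score d[k]."""
    r = np.asarray(events, dtype=float)
    n = np.asarray(totals, dtype=float)
    d = np.arange(len(r), dtype=float) if scores is None else np.asarray(scores, float)
    N, p_bar = n.sum(), r.sum() / n.sum()
    t = np.sum(d * (r - n * p_bar))
    var_t = p_bar * (1 - p_bar) * (np.sum(n * d**2) - np.sum(n * d) ** 2 / N)
    return t / np.sqrt(var_t)

if __name__ == "__main__":
    # Hypothetical 2 x 4 table: events and group sizes at scores 0..3.
    z = cochran_armitage_z(events=[5, 9, 14, 20], totals=[50, 50, 50, 50])
    print(z)   # compare to a standard normal reference
```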
79.
We consider the semiparametric proportional hazards model for the cause-specific hazard function in the analysis of competing risks data with a missing cause of failure. Inverse probability weighted and augmented inverse probability weighted estimating equations are proposed for estimating the regression parameters in the model, and their theoretical properties are established for inference. Simulation studies demonstrate that the augmented inverse probability weighted estimator is doubly robust and that the proposed method is appropriate for practical use. The simulations also compare the proposed estimators with the multiple imputation estimator of Lu and Tsiatis (2001). The application of the proposed method is illustrated using data from a bone marrow transplant study.
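The inverse probability weighting idea can be sketched as follows: among observed failures, fit a model for the probability that the cause is recorded, and weight the complete cases by the inverse of that estimated probability when fitting the cause-specific Cox model. The column names, the logistic missingness model, and the use of lifelines' CoxPHFitter with case weights are illustrative assumptions for the sketch, not the authors' exact estimating-equation implementation.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

def ipw_cause_specific_cox(df):
    """Assumed columns: 'time', 'failed' (any-cause failure indicator),
    'cause_observed' (1 if the cause is known), 'cause1' (1 if the known
    cause is the cause of interest), plus covariates 'x1', 'x2'."""
    failures = df[df["failed"] == 1]
    # Model P(cause observed | failure) with a logistic regression.
    miss_model = LogisticRegression().fit(failures[["x1", "x2"]],
                                          failures["cause_observed"])
    pi_hat = miss_model.predict_proba(df[["x1", "x2"]])[:, 1]

    work = df.copy()
    # Event of interest: an observed cause-1 failure, weighted by 1 / pi_hat.
    work["event1"] = (work["failed"] * work["cause_observed"] * work["cause1"]).astype(int)
    work["w"] = 1.0
    work.loc[work["event1"] == 1, "w"] = 1.0 / pi_hat[work["event1"] == 1]

    cph = CoxPHFitter()
    cph.fit(work[["time", "event1", "x1", "x2", "w"]],
            duration_col="time", event_col="event1",
            weights_col="w", robust=True)   # sandwich variance with case weights
    return cph
```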
80.
We investigate the properties of several statistical tests for comparing treatment groups with respect to multivariate survival data, based on the marginal analysis approach introduced by Wei, Lin and Weissfeld [Regression analysis of multivariate incomplete failure time data by modeling marginal distributions, JASA, vol. 84, pp. 1065–1073]. We consider two types of directional tests, based on a constrained maximization and on linear combinations of the unconstrained maximizer of the working likelihood function, as well as the omnibus test arising from the same working likelihood. The directional tests are members of a larger class of tests from which an asymptotically optimal test can be found. We compare the asymptotic powers of the tests under general contiguous alternatives for a variety of settings, and we also consider the choice of the number of survival times to include in the multivariate outcome. We illustrate the results with simulations and with results from a clinical trial examining recurring opportunistic infections in persons with HIV.
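Given the stacked marginal coefficient estimates and their joint robust covariance from such a marginal fit, the omnibus and linear-combination (directional) statistics reduce to simple quadratic forms; the sketch below assumes those estimates are already available, and the example numbers and contrast are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def wlw_tests(beta_hat, cov, c=None):
    """Omnibus test: beta' V^{-1} beta ~ chi^2_K under H0: beta = 0.
    Directional test for a contrast c: (c'beta)^2 / (c'Vc) ~ chi^2_1."""
    beta_hat = np.asarray(beta_hat, dtype=float)
    cov = np.asarray(cov, dtype=float)
    k = beta_hat.size
    omnibus = beta_hat @ np.linalg.solve(cov, beta_hat)
    out = {"omnibus": (omnibus, chi2.sf(omnibus, k))}
    if c is not None:
        c = np.asarray(c, dtype=float)
        directional = (c @ beta_hat) ** 2 / (c @ cov @ c)
        out["directional"] = (directional, chi2.sf(directional, 1))
    return out

if __name__ == "__main__":
    beta = np.array([0.30, 0.25, 0.20])             # hypothetical marginal log-hazard ratios
    V = 0.01 * (np.eye(3) + 0.5 * (np.ones((3, 3)) - np.eye(3)))
    print(wlw_tests(beta, V, c=np.ones(3) / 3))     # equal-weight linear combination
```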