91.
The hazard rate (HR) and the mean residual lifetime are two of the most practical and best-known functions in biometry, reliability, statistics and life testing. More recently, the reversed HR function has been found to have interesting properties that are useful in additional areas such as censored data and forensic science. For these three biometric functions, we propose methods for testing the null hypothesis that they take on a known functional form against the alternative that they dominate, or are dominated by, this known form. This goodness-of-fit-type testing is wider in application and more interesting than the long-standing procedures that test exponentiality against monotonicity of these functions, or even than change-point problems, because the null can be any chosen survival distribution rather than the exponential alone. For this general setting we present easy-to-implement tests and generalize them into classes of statistics that can lead to more powerful and efficient testing.
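For reference, the three functions discussed in this abstract have the standard textbook definitions below (general background, not reproduced from the paper): for a lifetime X with density f, distribution function F and survival function S = 1 − F,

```latex
h(t) = \frac{f(t)}{S(t)}, \qquad
m(t) = \mathrm{E}[X - t \mid X > t] = \frac{1}{S(t)} \int_t^{\infty} S(u)\,\mathrm{d}u, \qquad
\tilde{h}(t) = \frac{f(t)}{F(t)},
```

where h is the hazard rate, m the mean residual lifetime and \tilde{h} the reversed hazard rate.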
92.
The estimation of the incremental cost–effectiveness ratio (ICER) has received increasing attention recently. The ICER is the ratio of the change in costs of a therapeutic intervention to the change in its effects. Despite its intuitive interpretation as an additional cost per additional unit of benefit, estimating the distribution of a ratio of two stochastically dependent quantities is challenging. A vast literature on statistical methods for the ICER has developed over the past two decades, but none of these methods provides an unbiased estimator. Here, to obtain an unbiased estimator of the cost–effectiveness ratio (CER), a zero intercept is assumed in the bivariate normal regression. For equal sample sizes, the Iman–Conover algorithm is applied to construct the desired variance–covariance matrix of two random bivariate samples, and the estimation then follows the same approach as for the CER to obtain an unbiased estimator of the ICER. For unequal sample sizes, bootstrapping combined with the Iman–Conover algorithm is employed. Simulation experiments show that the regression-type estimator performs overwhelmingly better than the sample-mean estimator in terms of mean squared error in all cases.
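As a rough illustration of the quantities involved (a minimal sketch, not the authors' estimator; the data, the arbitrary pairing of the two arms and the regression-through-origin step are all assumptions made for demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical (cost, effect) data for a control arm (0) and a treatment arm (1).
cost0, eff0 = rng.normal(1000, 150, 200), rng.normal(2.0, 0.4, 200)
cost1, eff1 = rng.normal(1400, 200, 200), rng.normal(2.6, 0.5, 200)

# Plug-in ICER: difference in mean costs over difference in mean effects.
icer_plugin = (cost1.mean() - cost0.mean()) / (eff1.mean() - eff0.mean())

# Regression-through-origin analogue: regress incremental cost on incremental
# effect with a zero intercept, pairing the two arms arbitrarily (illustration only).
d_cost, d_eff = cost1 - cost0, eff1 - eff0
icer_regression = (d_eff @ d_cost) / (d_eff @ d_eff)   # least-squares slope, no intercept

print(icer_plugin, icer_regression)
```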
93.
In reliability theory, risk analysis, renewal processes and actuarial studies, residual lifetime data play an essential role in studying the conditional tail of the lifetime distribution. In this paper, based on observed ordered residual Weibull data, we introduce several methods for obtaining prediction intervals (PIs) of future residual lifetimes, including likelihood, Wald, moment, parametric bootstrap and highest conditional methods. Monte Carlo simulations are performed to compare the performance of the resulting PIs, and one data analysis is presented for illustration.
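A minimal plug-in sketch of one such prediction interval, assuming a complete (uncensored) Weibull sample and ignoring parameter-estimation uncertainty (which the paper's likelihood and bootstrap methods account for); all numbers are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical complete Weibull lifetime sample (shape 1.5, scale 10).
data = stats.weibull_min.rvs(1.5, scale=10, size=100, random_state=rng)

t0 = 5.0                                            # age already survived
shape, _, scale = stats.weibull_min.fit(data, floc=0)

# Simulate residual lifetimes X - t0 given X > t0 by inverse-transform sampling
# from the conditional distribution at the fitted (plug-in) parameters.
s_t0 = stats.weibull_min.sf(t0, shape, scale=scale)
u = rng.uniform(size=20000)
x = stats.weibull_min.isf(u * s_t0, shape, scale=scale)   # draws from X | X > t0
residual = x - t0

print(np.percentile(residual, [2.5, 97.5]))         # crude 95% prediction interval
```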
94.
An assumption made in the classification problem is that the data being classified follow a distribution with the same parameters as the data used to obtain the discriminant functions. A method based on mixtures of two normal distributions is proposed as a way of checking this assumption and modifying the discriminant functions accordingly. As a first step, the case considered in this paper is that of a shift in the mean of one or two univariate normal distributions, with all other parameters fixed and known. Calculations based on asymptotic results indicate that the proposed method works well even for small shifts.
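A small sketch of the mixture idea, using a generic two-component normal mixture fit (the paper keeps the variances and other parameters fixed and known, which this off-the-shelf fit does not); the data and the shift of 1.5 are hypothetical:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Training data are assumed N(0, 1); the new data to be classified contain a
# subpopulation whose mean has shifted by a hypothetical 1.5.
new_data = np.concatenate([rng.normal(0.0, 1.0, 300),
                           rng.normal(1.5, 1.0, 100)]).reshape(-1, 1)

# Fit a two-component normal mixture; a clearly separated second component mean
# with a non-negligible weight flags a shift relative to the training distribution.
gm = GaussianMixture(n_components=2, random_state=0).fit(new_data)
print(gm.means_.ravel(), gm.weights_)
```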
95.
Kleinbaum (1973) developed a generalized growth curve model for analyzing incomplete longitudinal data. In this paper, the small-sample properties of several related test statistics are investigated via Monte Carlo techniques. The covariance matrix is estimated by each of three non-iterative methods, and the null and non-null distributions of these test statistics are examined.
96.
Clinical phase II trials in oncology are conducted to determine whether the activity of a new anticancer treatment is promising enough to merit further investigation. Two‐stage designs are commonly used for this situation to allow for early termination. Designs proposed in the literature so far have the common drawback that the sample sizes for the two stages have to be specified in the protocol and have to be adhered to strictly during the course of the trial. As a consequence, designs that allow a higher extent of flexibility are desirable. In this article, we propose a new adaptive method that allows an arbitrary modification of the sample size of the second stage using the results of the interim analysis or external information while controlling the type I error rate. If the sample size is not changed during the trial, the proposed design shows very similar characteristics to the optimal two‐stage design proposed by Chang et al. (Biometrics 1987; 43:865–874). However, the new design allows the use of mid‐course information for the planning of the second stage, thus meeting practical requirements when performing clinical phase II trials in oncology. Copyright © 2012 John Wiley & Sons, Ltd.
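For background, the sketch below computes the operating characteristics of a conventional fixed two-stage design of the kind the adaptive method generalizes; the design constants and response rates are hypothetical, not those of Chang et al.:

```python
from scipy.stats import binom

def two_stage_oc(p, n1, r1, n2, r):
    """Operating characteristics of a fixed two-stage design: stop after stage 1
    if responses <= r1; otherwise enrol n2 more patients and declare the
    treatment promising if the total number of responses exceeds r."""
    pet = binom.cdf(r1, n1, p)                       # probability of early termination
    reject = sum(binom.pmf(x, n1, p) * binom.sf(r - x, n2, p)
                 for x in range(r1 + 1, n1 + 1))     # P(declare promising)
    return pet, reject

# Hypothetical design constants: 10 + 19 patients, cut-offs r1 = 1 and r = 5.
print(two_stage_oc(0.10, 10, 1, 19, 5))   # under an assumed null response rate
print(two_stage_oc(0.30, 10, 1, 19, 5))   # under an assumed alternative
```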
97.
Very often, the likelihoods for circular data sets take quite complicated forms, and the functional forms of the normalising constants, which depend upon the unknown parameters, are unknown. This latter problem generally precludes rigorous, exact inference (both classical and Bayesian) for circular data. Noting the paucity of literature on Bayesian circular data analysis, and because the Bayesian paradigm naturally permits realistic data analysis, we address the above problem from a Bayesian perspective. In particular, we propose a methodology that combines importance sampling and Markov chain Monte Carlo (MCMC) in a very effective manner to sample from the posterior distribution of the parameters given the circular data. With a simulation study and a real data analysis, we demonstrate the considerable reliability and flexibility of the proposed methodology in analysing circular data.
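A minimal sketch of the basic difficulty and one crude workaround: the unnormalised circular density below is hypothetical, its normalising constant is approximated numerically inside a random-walk Metropolis sampler, and the whole construction is only a stand-in for the importance-sampling/MCMC combination actually proposed:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical circular density p(theta | kappa) proportional to
# exp(kappa * cos(theta) + 0.5 * cos(2 * theta)); its normalising constant
# Z(kappa) is treated as unavailable in closed form.
def log_unnorm(theta, kappa):
    return kappa * np.cos(theta) + 0.5 * np.cos(2.0 * theta)

def log_z(kappa, m=2000):
    # Numerical estimate of log Z(kappa) over the circle (the paper instead
    # estimates this kind of constant by importance sampling).
    grid = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    return np.log(2.0 * np.pi * np.mean(np.exp(log_unnorm(grid, kappa))))

def log_post(kappa, data):
    if not 0.0 < kappa < 20.0:                 # flat prior on (0, 20)
        return -np.inf
    return np.sum(log_unnorm(data, kappa)) - data.size * log_z(kappa)

data = rng.vonmises(0.0, 2.0, size=50)         # synthetic circular observations

# Random-walk Metropolis on the concentration parameter kappa.
kappa = 1.0
current = log_post(kappa, data)
draws = []
for _ in range(3000):
    prop = kappa + rng.normal(0.0, 0.3)
    cand = log_post(prop, data)
    if np.log(rng.uniform()) < cand - current:
        kappa, current = prop, cand
    draws.append(kappa)

print(np.mean(draws[1000:]))                   # crude posterior mean after burn-in
```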
98.
Non-coding deoxyribonucleic acid (DNA) can typically be modelled by a sequence of Bernoulli random variables by coding one base, e.g. T, as 1 and the other bases as 0. If a segment of a sequence is functionally important, the probability of a 1 in this changed segment will differ from that in the surrounding DNA. It is important to be able to detect whether such a segment occurs in a particular DNA sequence and to pin-point it so that a molecular biologist can investigate its possible function. Here we discuss methods for testing for the occurrence of such a changed segment and for estimating its end points. Maximum-likelihood-based methods are not very tractable, so a nonparametric method based on the approach of Pettitt has been developed. The problem and its solution are illustrated by a specific DNA example.
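A brute-force sketch of the detection problem (not Pettitt's nonparametric statistic): code one base as 1, scan all sufficiently long segments and report the one whose proportion of 1s deviates most from the overall proportion; the data and thresholds are synthetic and hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic sequence: background P(T) = 0.25, with one enriched segment where P(T) = 0.6.
x = rng.binomial(1, 0.25, 500)
x[200:260] = rng.binomial(1, 0.60, 60)

# Brute-force scan: for every sufficiently long segment, compare its proportion of
# 1s with the overall proportion, standardised by a binomial-style standard error.
n = len(x)
csum = np.concatenate([[0], np.cumsum(x)])
p_all = x.mean()
best, best_seg = 0.0, None
for i in range(n):
    for j in range(i + 20, n + 1):             # minimum segment length of 20
        m = j - i
        p_in = (csum[j] - csum[i]) / m
        z = abs(p_in - p_all) / np.sqrt(p_all * (1.0 - p_all) / m)
        if z > best:
            best, best_seg = z, (i, j)

print(best_seg, best)                          # should point near positions 200-260
```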
99.
Model checking with discrete data regressions can be difficult because the usual methods, such as residual plots, have complicated reference distributions that depend on the parameters in the model. Posterior predictive checks have been proposed as a Bayesian way to average the results of goodness-of-fit tests in the presence of uncertainty in the estimation of the parameters. We try this approach using a variety of discrepancy variables for generalized linear models fitted to a historical data set on behavioural learning. We then discuss the general applicability of our findings in the context of a recent applied example on which we have worked. We find that the following discrepancy variables work well, in the sense of being easy to interpret and sensitive to important model failures: structured displays of the entire data set, general discrepancy variables based on plots of binned or smoothed residuals versus predictors, and specific discrepancy variables created on the basis of the particular concerns arising in an application. Plots of binned residuals are especially easy to use because their predictive distributions under the model are sufficiently simple that model checks can often be made implicitly. The following discrepancy variables did not work well: scatterplots of latent residuals defined from an underlying continuous model and quantile–quantile plots of these residuals.
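A minimal sketch of a binned-residual check for a logistic regression, using synthetic data and an arbitrary choice of 20 bins; the ±2 standard-error flag is only a rough stand-in for the predictive reference distribution discussed in the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# Synthetic binary responses with a single predictor.
x = rng.normal(size=(1000, 1))
p_true = 1.0 / (1.0 + np.exp(-(0.5 + 1.2 * x[:, 0])))
y = rng.binomial(1, p_true)

fit = LogisticRegression().fit(x, y)
p_hat = fit.predict_proba(x)[:, 1]
resid = y - p_hat

# Binned residuals: order observations by fitted probability, average the raw
# residuals within each bin, and flag bins outside a rough +/- 2 SE band.
order = np.argsort(p_hat)
for bins in np.array_split(order, 20):
    mean_r = resid[bins].mean()
    se = resid[bins].std(ddof=1) / np.sqrt(len(bins))
    flag = "*" if abs(mean_r) > 2.0 * se else " "
    print(f"fitted ~{p_hat[bins].mean():.2f}  binned residual {mean_r:+.3f} {flag}")
```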
100.
Quantitative trait loci (QTL) mapping is a growing field in statistical genetics. In plants, QTL detection experiments often feature replicates or clones within a specific genetic line. In this work, a Bayesian hierarchical regression model is applied to simulated QTL data and to a dataset from Arabidopsis thaliana for locating the QTL associated with cotyledon opening. A conditional model search strategy based on Bayesian model averaging is used to reduce the computational burden.
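A crude stand-in for the model-averaging idea (not the paper's hierarchical model): approximate posterior probabilities over hypothetical single-marker models via BIC weights on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic data: 50 lines, 10 binary markers; marker 3 is the true QTL.
n, m = 50, 10
geno = rng.binomial(1, 0.5, (n, m))
pheno = 1.0 + 1.5 * geno[:, 3] + rng.normal(0.0, 1.0, n)

# BIC-based approximation to posterior model probabilities over single-marker
# regressions, a crude stand-in for a Bayesian model-averaging search.
bics = []
for j in range(m):
    X = np.column_stack([np.ones(n), geno[:, j]])
    beta, *_ = np.linalg.lstsq(X, pheno, rcond=None)
    rss = np.sum((pheno - X @ beta) ** 2)
    bics.append(n * np.log(rss / n) + X.shape[1] * np.log(n))
bics = np.array(bics)
weights = np.exp(-0.5 * (bics - bics.min()))
weights /= weights.sum()

print("approximate posterior model probability per marker:", np.round(weights, 3))
```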