41.
The statistical analysis of patient-reported outcomes (PROs) as endpoints has proven to be of great practical relevance. The scores or indexes resulting from the questionnaires used to measure PROs can be treated as continuous or ordinal. The goal of this study is to propose and evaluate a recoding process for the scores so that they can be treated as binomial outcomes and, therefore, analyzed using logistic regression with random effects. The general methodology of recoding is based on the observable values of the scores. In order to obtain an optimal recoding, the recoding method is evaluated for different values of the parameters of the binomial distribution and different probability distributions of the random effects. We illustrate, evaluate and validate the proposed method of recoding with the Short Form-36 (SF-36) Survey and real data. The optimal recoding approach is very useful and flexible. Moreover, it has a natural interpretation, not only for ordinal scores but also for questionnaires with many dimensions and different profiles where a common method of analysis is desired, such as the SF-36.
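The core recoding idea can be illustrated with a minimal sketch (our own illustration, not the authors' exact scheme): an ordinal score with k + 1 observable levels is mapped to a count of "successes" out of k binomial trials, which a logistic mixed model can then consume. The function name and example levels are hypothetical.

```python
def recode_to_binomial(score, levels):
    """Map an observable ordinal score to a (successes, trials) pair.

    `levels` lists the observable values of the score; the recoded
    outcome is the rank of `score` among them, out of
    len(levels) - 1 binomial trials.
    """
    ordered = sorted(levels)
    if score not in ordered:
        raise ValueError(f"{score} is not an observable level")
    return ordered.index(score), len(ordered) - 1

# A subscale observable only at 0, 25, 50, 75, 100 becomes Binomial(4, p):
print(recode_to_binomial(75, [0, 25, 50, 75, 100]))  # (3, 4)
```

Each recoded pair can then enter a binomial logistic regression with a patient-level random effect.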
42.
A substantial degree of uncertainty surrounds the reconstruction of events from memory recall. This form of measurement error affects the performance of structured interviews such as the Composite International Diagnostic Interview (CIDI), an important tool for assessing mental health in the community. Measurement error probably explains the discrepancy in estimates between longitudinal studies with repeated assessments (the gold standard), which yield approximately constant rates of depression, and cross-sectional studies, which often find increasing rates closer in time to the interview. Repeated assessments of current status (or recent history) are more reliable than reconstruction of a person's psychiatric history from a single interview. In this paper, we demonstrate a method of estimating a time-varying measurement error distribution in the age of onset of an initial depressive episode, as diagnosed by the CIDI, based on an assumption about age-specific incidence rates. High-dimensional non-parametric estimation is achieved by the EM algorithm with smoothing. The method is applied to data from a Norwegian mental health survey in 2000. The measurement error distribution changes dramatically from 1980 to 2000, with increasing variance and greater bias further away in time from the interview. Some influence of the measurement error on already published results is found.
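The estimation step can be illustrated with a toy discrete deconvolution (a simplified sketch of our own, without the paper's smoothing step or time-varying structure): if the reported onset is the true onset plus a non-negative recall error, and the incidence distribution over true onsets is assumed known, the error distribution can be estimated by EM.

```python
import numpy as np

def em_recall_error(obs_counts, incidence, n_err, n_iter=500):
    """EM estimate of a discrete recall-error distribution e[d], d = 0..n_err-1,
    assuming reported onset o = true onset t + error d and a known incidence
    distribution over t. Simplified sketch: no smoothing penalty."""
    e = np.full(n_err, 1.0 / n_err)          # uniform start
    T = len(incidence)
    for _ in range(n_iter):
        acc = np.zeros(n_err)
        for o, c in enumerate(obs_counts):
            if c == 0:
                continue
            w = np.zeros(T)                   # E-step: posterior over true t
            for t in range(T):
                d = o - t
                if 0 <= d < n_err:
                    w[t] = incidence[t] * e[d]
            if w.sum() == 0:
                continue
            w /= w.sum()
            for t in range(T):
                d = o - t
                if 0 <= d < n_err:
                    acc[d] += c * w[t]
        e = acc / acc.sum()                   # M-step
    return e

# Toy check: uniform incidence over 5 years, true error dist (0.6, 0.3, 0.1)
incidence = np.full(5, 0.2)
e_true = np.array([0.6, 0.3, 0.1])
obs = np.convolve(incidence, e_true) * 1000   # expected report counts
e_hat = em_recall_error(obs, incidence, 3)
print(e_hat)
```

Because the per-cell likelihood is log-concave in the error probabilities, the EM iterates approach the unique maximizer; the smoothing used in the paper would additionally penalize roughness of `e`.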
43.
In this paper, we develop a novel family of bimodal univariate distributions (also allowing for unimodal shapes) and demonstrate its use on the well-known, almost classical data set of durations and waiting times of eruptions of the Old Faithful geyser in Yellowstone National Park. Specifically, we analyze the Old Faithful data set with 272 data points provided in Dekking et al. [3]. In the process, we develop a bivariate distribution using a copula technique and compare its fit to that of a mixture of bivariate normal distributions fitted to the same bivariate data set. We believe the fit analysis and comparison are primarily illustrative from an educational perspective for distribution-theory modelers, since a variety of statistical techniques are demonstrated along the way. We do not claim one model as preferred over the other.
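The copula construction can be shown in isolation with a generic Gaussian-copula sketch (our own illustration with exponential marginals of our choosing, not the paper's fitted model): correlated standard normals are pushed through the normal CDF to uniforms, then through the marginal quantile functions.

```python
import math
import numpy as np

def gaussian_copula_sample(n, rho, ppf_x, ppf_y, rng):
    """Sample (X, Y) whose dependence is a Gaussian copula with correlation
    rho and whose marginals are given by the quantile functions ppf_x, ppf_y."""
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.standard_normal(n)
    # map correlated normals to uniforms via the standard normal CDF
    phi = np.vectorize(lambda v: 0.5 * (1.0 + math.erf(v / math.sqrt(2.0))))
    return ppf_x(phi(z1)), ppf_y(phi(z2))

# Exponential(1) marginals coupled with rho = 0.8 (hypothetical choice)
exp_ppf = lambda u: -np.log1p(-u)
rng = np.random.default_rng(0)
x, y = gaussian_copula_sample(20000, 0.8, exp_ppf, exp_ppf, rng)
print(round(float(np.corrcoef(x, y)[0, 1]), 2))
```

Swapping `exp_ppf` for the quantile function of a bimodal marginal gives exactly the kind of bivariate model the paper compares against a normal mixture.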
44.
The most popular approach in extreme value statistics is the modelling of threshold exceedances using the asymptotically motivated generalised Pareto distribution. This approach involves the selection of a high threshold above which the model fits the data well. In some applications only few observations of the measurement process are recorded, so selecting a high quantile of the sample as the threshold leaves almost no exceedances. In this paper we propose extensions of the generalised Pareto distribution that incorporate an additional shape parameter while leaving the tail behaviour unaffected. The inclusion of this parameter offers additional structure for the main body of the distribution, improves the stability of the modified scale, tail index and return level estimates with respect to threshold choice, and allows a lower threshold to be selected. We illustrate the benefits of the proposed models with a simulation study and two case studies.
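For context, the baseline the extensions build on is a fit of the standard generalised Pareto distribution to exceedances. A minimal method-of-moments sketch (our own, valid when the shape ξ < 1/2; not the paper's extended models):

```python
import numpy as np

def gpd_moment_fit(exceedances):
    """Method-of-moments estimates (sigma, xi) for the generalised Pareto
    distribution, using mean = sigma/(1-xi) and var = sigma^2/((1-xi)^2 (1-2xi))."""
    m = exceedances.mean()
    v = exceedances.var(ddof=1)
    xi = 0.5 * (1.0 - m * m / v)
    sigma = 0.5 * m * (m * m / v + 1.0)
    return sigma, xi

# Exponential exceedances correspond to the GPD boundary case xi = 0
rng = np.random.default_rng(1)
data = rng.exponential(scale=0.5, size=5000)   # sigma = 0.5, xi = 0
print(gpd_moment_fit(data))
```

In practice, maximum likelihood is preferred once a threshold is fixed; the moment fit above just makes the (σ, ξ) parameterisation concrete.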
45.
46.
The boxplot is an effective data-visualization tool useful in diverse applications and disciplines. Although more sophisticated graphical methods exist, the boxplot remains relevant due to its simplicity, interpretability and usefulness, even in the age of big data. This article highlights the origins and development of the boxplot, now widely viewed as an industry standard, as well as its inherent limitations when dealing with data from skewed distributions, particularly when detecting outliers. The proposed Ratio-Skewed boxplot is shown to be practical and suitable for outlier labeling across several parametric distributions.
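The standard fences that the Ratio-Skewed variant adjusts are Tukey's 1.5·IQR rule, sketched below (the skewness adjustment itself is not reproduced here):

```python
import numpy as np

def tukey_outliers(data, k=1.5):
    """Label points outside [Q1 - k*IQR, Q3 + k*IQR] as outliers."""
    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [x for x in data if x < lo or x > hi]

print(tukey_outliers([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 100]))  # [100]
```

For skewed data these symmetric fences flag too many points on the long-tail side, which is precisely the limitation the article addresses.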
47.
Troutt (1991, 1993) proposed the idea of the vertical density representation (VDR) based on the Box-Muller method. Kotz, Fang and Liang (1997) provided a systematic study of the multivariate vertical density representation (MVDR). Suppose that we want to generate a random vector X ∈ R^n that has a density function f(x). The key point of using the MVDR is to generate the uniform distribution on S_f(v) = {x : f(x) = v} for any v > 0, which is a surface in R^n. In this paper we use the conditional distribution method to generate the uniform distribution on a domain or on some surface, and based on it we propose an alternative version of the MVDR (type 2 MVDR), by which one can transfer the problem of generating a random vector X with a given density f to that of generating (X, X_{n+1}) following the uniform distribution on a region in R^{n+1} defined by f. Several examples indicate that the proposed method is quite practical.
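The type 2 idea, that X with density f is the first coordinate of a point uniformly distributed on the region under f in R^{n+1}, can be illustrated in one dimension by rejection sampling (a generic sketch of our own, not the paper's conditional-distribution construction):

```python
import numpy as np

def sample_under_density(f, lo, hi, fmax, n, rng):
    """Draw points uniformly from {(x, y): lo <= x <= hi, 0 <= y <= f(x)};
    the x-coordinates then follow the density proportional to f."""
    out = np.empty(0)
    while out.size < n:
        x = rng.uniform(lo, hi, size=4 * n)
        y = rng.uniform(0.0, fmax, size=4 * n)
        out = np.concatenate([out, x[y <= f(x)]])
    return out[:n]

# Standard normal kernel (unnormalised) on [-5, 5]
f = lambda x: np.exp(-0.5 * x * x)
rng = np.random.default_rng(42)
xs = sample_under_density(f, -5.0, 5.0, 1.0, 20000, rng)
print(round(float(xs.mean()), 2), round(float(xs.var()), 2))
```

The MVDR replaces the brute-force rejection step with an exact construction of the uniform distribution on the region (or on the level surfaces) defined by f.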
48.
In the model of progressive type II censoring, point and interval estimation as well as relations for single and product moments are considered. Based on two-parameter exponential distributions, maximum likelihood estimators (MLEs), uniformly minimum variance unbiased estimators (UMVUEs) and best linear unbiased estimators (BLUEs) are derived for both location and scale parameters. Some properties of these estimators are shown. Moreover, results for single and product moments of progressive type II censored order statistics are presented to obtain recurrence relations from exponential and truncated exponential distributions. These relations may then be used to compute all the means, variances and covariances of progressive type II censored order statistics based on exponential distributions for arbitrary censoring schemes. The presented recurrence relations simplify those given by Aggarwala and Balakrishnan (1996).
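For the two-parameter exponential with location μ and scale θ, the MLEs under a progressive type II censoring scheme R = (R_1, …, R_m) take a simple closed form, sketched below with an invented toy sample (the scheme and values are hypothetical):

```python
def exp2_mle_progressive(x, R):
    """MLEs (mu_hat, theta_hat) for the two-parameter exponential under
    progressive type II censoring: mu_hat is the first observed failure and
    theta_hat = sum((R_i + 1) * (x_i - mu_hat)) / m."""
    m = len(x)
    mu = x[0]                        # x must be the sorted observed failures
    theta = sum((r + 1) * (xi - mu) for xi, r in zip(x, R)) / m
    return mu, theta

# Toy sample: m = 3 observed failures from n = 6 units, scheme R = (1, 0, 2)
print(exp2_mle_progressive([2.0, 3.0, 5.0], [1, 0, 2]))
# mu_hat = 2.0, theta_hat = 10/3 ≈ 3.33
```

The weights (R_i + 1) account for the R_i units removed (still surviving) at each failure time, which is what distinguishes this from the complete-sample MLE.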
49.
A. Ferreira, L. de Haan & L. Peng 《Statistics》2013, 47(5): 401-434
One of the major aims of one-dimensional extreme-value theory is to estimate quantiles outside the sample or at the boundary of the sample. The underlying idea of any method for doing this is to estimate a quantile well inside the sample but near the boundary, and then to shift it to the right place. The choice of this "anchor quantile" plays a major role in the accuracy of the method. We present a bootstrap method to achieve the optimal choice of sample fraction in either high quantile or endpoint estimation, extending earlier results by Hall and Weissman (1997) for high quantile estimation. We give detailed results for the estimators used by Dekkers et al. (1989). An alternative way of attacking problems of this kind is given in a paper by Drees and Kaufmann (1998).
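The role of the anchor quantile and the sample fraction k can be seen in the classical Weissman-type estimator for a heavy-tailed high quantile (a generic sketch of our own using the Hill estimator; the bootstrap choice of k itself is not reproduced):

```python
import numpy as np

def weissman_quantile(sample, k, p):
    """Estimate the (1 - p) quantile beyond the sample range:
    anchor at the (n - k)-th order statistic, then extrapolate with
    the Hill tail-index estimate gamma_hat."""
    x = np.sort(sample)
    n = x.size
    anchor = x[n - k - 1]                                 # X_{n-k:n}
    gamma = np.mean(np.log(x[n - k:]) - np.log(anchor))   # Hill estimator
    return anchor * (k / (n * p)) ** gamma

# Pareto tail with gamma = 0.5: true 0.999-quantile is 0.001 ** -0.5 ≈ 31.6
rng = np.random.default_rng(7)
sample = rng.uniform(size=10000) ** -0.5
print(weissman_quantile(sample, 500, 0.001))
```

The estimate is sensitive to k through both the anchor and gamma_hat, which is exactly the trade-off the bootstrap choice of sample fraction is designed to resolve.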
50.
The partial attributable risk (PAR) has been introduced as a tool for partitioning the responsibility for causing an adverse event among various risk factors. It arose in epidemiology, but it is also a valid general risk-allocation concept which can, for example, be applied to data from customer satisfaction surveys. So far, a variance formula for the PAR has been missing, so confidence intervals were not directly available. This paper provides the asymptotic normal distribution for the PAR determined from a cross-sectional study.
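For a single risk factor, the attributable risk that the PAR partitions can be computed from a cross-sectional 2×2 table as AR = (P(D) − P(D | unexposed)) / P(D). A sketch with a hypothetical table (the PAR partition across factors and its variance formula are not reproduced):

```python
def attributable_risk(d_exp, n_exp, d_unexp, n_unexp):
    """AR = (P(D) - P(D | unexposed)) / P(D) from cross-sectional counts:
    d_* diseased, n_* total, among exposed and unexposed groups."""
    p_d = (d_exp + d_unexp) / (n_exp + n_unexp)
    p_d_unexp = d_unexp / n_unexp
    return (p_d - p_d_unexp) / p_d

# Hypothetical table: 30/100 diseased among exposed, 10/100 among unexposed
print(attributable_risk(30, 100, 10, 100))  # ≈ 0.5
```

The PAR then shares this overall quantity out among several jointly acting risk factors; the paper's contribution is the asymptotic variance needed to attach confidence intervals to those shares.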