A total of 1,361 results were found; results 11–20 are listed below.
11.
Semiparametric Bayesian classification with longitudinal markers (total citations: 1; self-citations: 0; citations by others: 1)
Summary.  We analyse data from a study involving 173 pregnant women. The data are observed values of the β human chorionic gonadotropin hormone measured during the first 80 days of gestational age, including from one up to six longitudinal responses for each woman. The main objective in this study is to predict normal versus abnormal pregnancy outcomes from data that are available at the early stages of pregnancy. We achieve the desired classification with a semiparametric hierarchical model. Specifically, we consider a Dirichlet process mixture prior for the distribution of the random effects in each group. The unknown random-effects distributions are allowed to vary across groups but are made dependent by using a design vector to select different features of a single underlying random probability measure. The resulting model is an extension of the dependent Dirichlet process model, with an additional probability model for group classification. The model is shown to perform better than an alternative model which is based on independent Dirichlet processes for the groups. Relevant posterior distributions are summarized by using Markov chain Monte Carlo methods.
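The abstract names a Dirichlet process mixture prior but gives no construction; as background only, the sketch below (Python; function names and parameter values are illustrative, and this is not the authors' dependent-DP model) draws one random probability measure from a Dirichlet process by truncated stick-breaking, the representation that underlies such priors.

```python
import numpy as np

def sample_dp_stick_breaking(alpha, base_sampler, truncation=200, seed=None):
    """Draw one random measure G ~ DP(alpha, G0) by truncated stick-breaking:
    weights w_k = v_k * prod_{j<k}(1 - v_j) with v_k ~ Beta(1, alpha),
    atoms theta_k drawn i.i.d. from the base measure G0."""
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, alpha, size=truncation)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    atoms = base_sampler(truncation, rng)
    return w / w.sum(), atoms  # renormalize the truncated weights

# Base measure G0 = N(0, 1); convolving G with a kernel gives a DP mixture,
# the kind of prior the abstract places on each random-effects distribution.
w, atoms = sample_dp_stick_breaking(2.0, lambda k, r: r.normal(size=k), seed=0)
print(np.round(np.sort(w)[::-1][:5], 3))  # the few dominant atom weights
```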
12.
We compare and investigate Neyman's smooth test, its components, and the Kolmogorov-Smirnov (KS) goodness-of-fit test for testing the uniformity of multivariate forecast densities. Simulations indicate that the KS test lacks power when the forecast distributions are misspecified, especially for correlated sequences of random variables. Neyman's smooth test and its components work well in samples of size typically available, although there sometimes are size distortions. The components provide directed diagnosis regarding the kind of departure from the null. For illustration, the tests are applied to forecast densities obtained from a bivariate threshold model fitted to high-frequency financial data.  相似文献   
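The abstract does not reproduce the test statistics; purely as an illustration of the underlying recipe, the sketch below computes Neyman's smooth test of uniformity on probability integral transform (PIT) values using the standard shifted-Legendre construction. The function name and the choice of k = 4 components are illustrative, not the authors'.

```python
import numpy as np
from scipy.special import eval_legendre
from scipy.stats import chi2

def neyman_smooth_test(u, k=4):
    """Neyman's smooth test of uniformity on [0, 1].

    u : array of PIT values; k : number of smooth components.
    Returns the overall statistic (asymptotically chi-square(k) under H0),
    its p-value, and the individual components (each chi-square(1))."""
    u = np.asarray(u, dtype=float)
    n = u.size
    components = np.empty(k)
    for j in range(1, k + 1):
        # Shifted Legendre polynomial of degree j, orthonormal on [0, 1].
        pi_j = np.sqrt(2 * j + 1) * eval_legendre(j, 2 * u - 1)
        components[j - 1] = pi_j.sum() / np.sqrt(n)
    psi2 = np.sum(components ** 2)
    return psi2, chi2.sf(psi2, df=k), components ** 2

# PIT values from a correctly specified forecast density are uniform.
rng = np.random.default_rng(0)
stat, pval, comps = neyman_smooth_test(rng.uniform(size=500), k=4)
print(f"Psi^2 = {stat:.3f}, p = {pval:.3f}, components = {np.round(comps, 3)}")
```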
13.
Nowadays, there is an increasing interest in multi-point models and their applications in Earth sciences. However, users not only ask for multi-point methods able to capture the uncertainties of complex structures and to reproduce the properties of a training image, but they also need quantitative tools for assessing whether a set of realizations has the required properties. Moreover, it is crucial to study the sensitivity of the realizations to the size of the data template and to analyze how fast realization-based statistics converge on average toward training-based statistics. In this paper, some similarity measures and convergence indexes, based on physically measurable quantities and high-order cumulants, are presented. In the case study, multi-point simulations of the spatial distribution of coarse-grained limestone and calcareous rock, generated by using three templates of different sizes, are compared, and convergence toward training-based statistics is analyzed by taking into account increasing numbers of realizations.
14.
In the present work, we derive a set of reliability functionals to determine an allocation strategy among K (≥ 2) treatments when the response distributions, conditional on some continuous prognostic variable, are exponential with unknown linear regression functions as the means of the respective conditional distributions. Targeting such reliability functionals, we propose a covariate-adjusted response-adaptive randomization procedure for the multi-treatment single-period clinical trial under the Koziol–Green model for informative censoring. We compare the proposed procedure with the competing covariate-eliminated procedure.
15.
Multivariate failure time data are common in medical research; commonly used statistical models for such correlated failure-time data include frailty and marginal models. Both types of models most often assume proportional hazards (Cox, 1972), but the Cox model may not fit the data well. This article presents a class of linear transformation frailty models that includes, as a special case, the proportional hazards model with frailty. We then propose approximate procedures to derive the best linear unbiased estimates and predictors of the regression parameters and frailties. We apply the proposed methods to analyze results of a clinical trial of different dose levels of didanosine (ddI) among HIV-infected patients who were intolerant of zidovudine (ZDV). These methods yield estimates of treatment effects and of frailties corresponding to patient groups defined by clinical history prior to entry into the trial.
16.
We study a hypothesis testing problem involving the location model suggested by Olkin and Tate (1961). Specifically, we derive a likelihood ratio test of the associated location hypothesis as an alternative to the conventional method of carrying out separate tests for each of the parameters. A small-sample Monte Carlo comparison indicates the general superiority of the former in terms of statistical power. We also comment briefly on the properties of the test.
17.
Abstract

Cluster analysis is the assignment of objects into different groups or, more precisely, the partitioning of a data set into subsets (clusters) so that the data in each subset share some common trait according to some distance measure. Unlike classification, in clustering one has to first decide the optimum number of clusters and then assign the objects to the different clusters. Solving such problems for a large number of high-dimensional data points is quite complicated, and most of the existing algorithms will not perform properly. In the present work a new clustering technique applicable to large data sets has been used to cluster the spectra of 702,248 galaxies and quasars, each having 1,540 points in the wavelength range imposed by the instrument. The proposed technique has successfully discovered five clusters from this 702,248 × 1,540 data matrix.
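The abstract does not describe the new technique itself, so no attempt is made to reproduce it here; the sketch below only shows a generic way to cluster a tall spectra matrix into five groups with scikit-learn's MiniBatchKMeans, with a small synthetic block standing in for the 702,248 × 1,540 matrix. The algorithm and all parameter choices are a stand-in, not the authors' method.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.preprocessing import StandardScaler

# Stand-in for the spectra matrix (rows = objects, columns = flux values on a
# common wavelength grid); a small random block is used here for illustration.
rng = np.random.default_rng(1)
spectra = rng.normal(size=(10_000, 1_540))

# Normalise each wavelength bin, then cluster in mini-batches so that the full
# matrix never has to be processed at once.
X = StandardScaler().fit_transform(spectra)
km = MiniBatchKMeans(n_clusters=5, batch_size=4096, random_state=0)
labels = km.fit_predict(X)

print(np.bincount(labels))          # number of objects per cluster
print(km.cluster_centers_.shape)    # (5, 1540): one mean spectrum per cluster
```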
18.
Abstract

The notions of (sample) mean, median and mode are common tools for describing the central tendency of a given probability distribution. In this article, we propose a new measure of central tendency, the sample monomode, which is related to the notion of sample mode. We also illustrate the computation of the sample monomode and propose a statistical test for discrete monomodality based on the likelihood ratio statistic.
19.
ABSTRACT

We propose two nonparametric portmanteau test statistics for serial dependence in high dimensions using the correlation integral. One test depends on a cutoff threshold value, while the other test is free of this dependence. Although these tests may each be viewed as variants of the classical Brock, Dechert, and Scheinkman (BDS) test statistic, they avoid some of the major weaknesses of that test. We establish consistency and asymptotic normality of both portmanteau tests. Using Monte Carlo simulations, we investigate the small-sample properties of the tests for a variety of data-generating processes with normally and uniformly distributed innovations. We show that asymptotic theory provides accurate inference in finite samples and for relatively high dimensions. This is followed by a power comparison with the BDS test and with several rank-based extensions of the BDS test that have recently been proposed in the literature. Two real data examples are provided to illustrate the use of the test procedure.
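The ingredient that the proposed statistics share with the BDS test is the sample correlation integral; the sketch below computes it for a scalar series via delay embedding (the function name, threshold, and toy data are illustrative; the portmanteau statistics themselves are not reproduced here).

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_integral(x, m, eps):
    """Sample correlation integral C_m(eps) of a scalar series: the fraction of
    pairs of m-dimensional delay-embedded points within sup-norm distance eps."""
    x = np.asarray(x, dtype=float)
    n = x.size - m + 1
    # Delay-embedded points: row t is (x_t, x_{t+1}, ..., x_{t+m-1}).
    emb = np.column_stack([x[j:j + n] for j in range(m)])
    d = pdist(emb, metric="chebyshev")
    return np.mean(d < eps)

# For i.i.d. data, C_m(eps) is close to C_1(eps)**m; BDS-type statistics measure
# suitably scaled departures from this relation.
rng = np.random.default_rng(2)
x = rng.normal(size=2000)
eps = 1.5 * x.std()
print(correlation_integral(x, 2, eps), correlation_integral(x, 1, eps) ** 2)
```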
20.
ABSTRACT

When analyzing time-to-event data, there are various situations in which right censoring times for unfailed units are missing. In that context, by taking a supplementary sample of a convenient percentage of unfailed units, we propose a semi-parametric method for estimating a survival function under the natural extension of the Koziol–Green model to double random censoring. Some large sample properties of this estimator are derived. We prove uniform strong consistency and asymptotic weak convergence to a Gaussian process. A simulation study is also presented in order to analyze the behavior of the proposed estimator.
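The estimator for double random censoring is not given in the abstract; as background, the sketch below implements the classical Abdushukurov–Cheng–Lin estimator under ordinary Koziol–Green right censoring, the single-censoring case that the proposed extension generalizes (the function name and toy data are illustrative).

```python
import numpy as np

def acl_survival(z, delta, t_grid):
    """Abdushukurov-Cheng-Lin estimator of S_T(t) under the Koziol-Green model.

    z     : observed times min(T, C)
    delta : censoring indicators (1 = failure observed, 0 = right censored)
    Under Koziol-Green, 1 - F_C = (1 - F_T)^beta, so S_T = (1 - H)^p with
    p = P(delta = 1); both H and p are replaced by empirical estimates."""
    z = np.asarray(z, dtype=float)
    delta = np.asarray(delta, dtype=int)
    p_hat = delta.mean()                               # estimated P(uncensored)
    H_n = np.array([(z <= t).mean() for t in t_grid])  # empirical df of Z
    return (1.0 - H_n) ** p_hat

# Toy example: exponential failures with Koziol-Green (proportional) censoring.
rng = np.random.default_rng(3)
T = rng.exponential(1.0, size=1000)
C = rng.exponential(2.0, size=1000)   # gives 1 - F_C = (1 - F_T)^{0.5}
Z, D = np.minimum(T, C), (T <= C).astype(int)
grid = np.linspace(0, 3, 7)
print(np.round(acl_survival(Z, D, grid), 3))
print(np.round(np.exp(-grid), 3))     # true S_T(t) for comparison
```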