Search results: 1,578 records in total, search time 15 ms (1,544 subscription full text, 32 free, 2 domestic free).
By subject: Management 69; Demography 2; Collected works 21; Theory and methodology 8; General 161; Sociology 20; Statistics 1,297.
By year: 2023 (12), 2022 (15), 2021 (12), 2020 (20), 2019 (52), 2018 (60), 2017 (107), 2016 (49), 2015 (27), 2014 (44), 2013 (340), 2012 (107), 2011 (60), 2010 (47), 2009 (59), 2008 (53), 2007 (55), 2006 (50), 2005 (55), 2004 (52), 2003 (41), 2002 (34), 2001 (31), 2000 (29), 1999 (20), 1998 (22), 1997 (22), 1996 (9), 1995 (13), 1994 (7), 1993 (6), 1992 (9), 1991 (11), 1990 (4), 1989 (1), 1988 (5), 1987 (6), 1986 (3), 1985 (3), 1984 (5), 1983 (8), 1982 (6), 1981 (2), 1980 (2), 1979 (1), 1978 (2).
21.
To model a hypothesis of double monotone dependence between two ordinal categorical variables A and B, a set of symmetric odds ratios defined on the joint probability function is usually subjected to linear inequality constraints. In this paper, by contrast, two sets of asymmetric odds ratios, defined respectively on the conditional distributions of A given B and on the conditional distributions of B given A, are subjected to linear inequality constraints. If the joint probabilities are parameterized by a saturated log-linear model, these constraints become nonlinear inequality constraints on the log-linear parameters. The problem considered here is non-standard both because of the nonlinear inequality constraints and because the number of constraints exceeds the number of parameters of the saturated log-linear model. This work has been supported by the COFIN 2002 project, references 2002133957_002 and 2002133957_004. Preliminary findings were presented at the SIS (Società Italiana di Statistica) Annual Meeting, Bari, 2004.
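As a rough illustration of the quantities being constrained (not the authors' exact parameterization), the sketch below takes a hypothetical 3x3 joint probability table, forms the two conditional distributions p(A|B) and p(B|A), and checks a simple monotone-dependence condition on each of them, namely non-negativity of all adjacent (local) log-odds ratios.

```python
import numpy as np

def local_log_odds_ratios(table):
    """Log-odds ratios of all adjacent 2x2 blocks of a probability table."""
    num = table[:-1, :-1] * table[1:, 1:]
    den = table[1:, :-1] * table[:-1, 1:]
    return np.log(num / den)

# Hypothetical 3x3 joint probability table for ordinal A (rows) and B (columns).
P = np.array([[0.20, 0.08, 0.02],
              [0.10, 0.15, 0.10],
              [0.03, 0.07, 0.25]])

cond_A_given_B = P / P.sum(axis=0, keepdims=True)        # each column sums to 1
cond_B_given_A = (P / P.sum(axis=1, keepdims=True)).T    # conditional of B given A

# A simple sufficient condition for monotone (positive) dependence in each
# direction: every local log-odds ratio is non-negative.
print("A|B monotone:", bool(np.all(local_log_odds_ratios(cond_A_given_B) >= 0)))
print("B|A monotone:", bool(np.all(local_log_odds_ratios(cond_B_given_A) >= 0)))
```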
22.
Consider a randomized trial in which time to the occurrence of a particular disease, say Pneumocystis pneumonia in an AIDS trial or breast cancer in a mammographic screening trial, is the failure time of primary interest. Suppose that time to disease is subject to informative censoring by the minimum of time to death, loss to follow-up, and end of follow-up. In such a trial, the censoring time is observed for all study subjects, including failures. In the presence of informative censoring, it is not possible to consistently estimate the effect of treatment on time to disease without imposing additional non-identifiable assumptions. The goals of this paper are to specify two non-identifiable assumptions that allow one to test for and estimate an effect of treatment on time to disease in the presence of informative censoring. In a companion paper (Robins, 1995), we provide consistent and reasonably efficient semiparametric estimators for the treatment effect under these assumptions. In this paper we largely restrict attention to testing. We propose tests that, like standard weighted log-rank tests, are asymptotically distribution-free α-level tests under the null hypothesis of no causal effect of treatment on time to disease whenever the censoring and failure distributions are conditionally independent given treatment arm. However, our tests remain asymptotically distribution-free α-level tests in the presence of informative censoring provided either of our assumptions is true. In contrast, a weighted log-rank test will be an α-level test in the presence of informative censoring only if (1) one of our two non-identifiable assumptions holds, and (2) the distribution of time to censoring is the same in the two treatment arms. We also extend our methods to studies of the effect of a treatment on the evolution over time of the mean of a repeated-measures outcome, such as CD4 count.
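For reference, the sketch below implements the standard weighted log-rank statistic that the paper uses as its benchmark (not the authors' censoring-robust tests), applied to hypothetical simulated data in which censoring happens to be independent of the failure time; all variable names and the simulation setup are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def logrank_test(time, event, group, weights=None):
    """Standard (optionally weighted) two-sample log-rank test.
    time: follow-up times; event: 1 = disease observed, 0 = censored;
    group: 0/1 treatment arm; weights: optional weight per distinct event time."""
    times = np.sort(np.unique(time[event == 1]))
    if weights is None:
        weights = np.ones(len(times))
    obs_minus_exp, var = 0.0, 0.0
    for w, t in zip(weights, times):
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        exp1 = d * n1 / n
        var1 = d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1) if n > 1 else 0.0
        obs_minus_exp += w * (d1 - exp1)
        var += w ** 2 * var1
    stat = obs_minus_exp ** 2 / var
    return stat, chi2.sf(stat, df=1)

# Hypothetical data: time to disease, censoring by the minimum of competing times.
rng = np.random.default_rng(0)
n = 200
group = rng.integers(0, 2, n)
t_disease = rng.exponential(10.0 / (1 + 0.5 * group))
t_censor = rng.exponential(8.0, n)              # death / loss / end of follow-up
time = np.minimum(t_disease, t_censor)
event = (t_disease <= t_censor).astype(int)
stat, p = logrank_test(time, event, group)
print(f"log-rank chi-square = {stat:.2f}, p = {p:.3f}")
```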
23.
Assignment of individuals to the correct species or population of origin based on a comparison of allele profiles has in recent years become more accurate due to improvements in DNA marker technology. A method of assessing the error in such assignment problems is presented. The method is based on the exact hypergeometric distributions of contingency tables conditioned on their marginal totals. The result is a confidence region of fixed confidence level. This confidence level is exactly calculable in principle, and can be estimated very accurately by simulation, without knowledge of the true population allele frequencies. Various properties of these techniques are examined through application to several examples of actual DNA marker data and through simulation studies. Methods that may reduce computation time are discussed and illustrated.
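A minimal sketch of the exact conditioning the abstract refers to, for a single hypothetical 2x2 table of allele counts: conditioned on the margins, one cell follows a hypergeometric distribution, and Fisher's exact test aggregates these conditional probabilities. The paper's confidence regions build on such conditional distributions across markers; the counts below are made up.

```python
import numpy as np
from scipy.stats import hypergeom, fisher_exact

# Hypothetical 2x2 table of allele counts at one marker in two candidate
# source populations (rows = population, columns = allele variant).
table = np.array([[18,  2],
                  [ 7, 13]])

row_sums = table.sum(axis=1)
col_sums = table.sum(axis=0)
N = table.sum()

# Conditioned on the margins, the (0, 0) cell is hypergeometric: draw
# row_sums[0] individuals out of N, of which col_sums[0] carry allele 0.
cell_dist = hypergeom(M=N, n=col_sums[0], N=row_sums[0])
print("P(observed table | margins) =", cell_dist.pmf(table[0, 0]))

# Fisher's exact test sums these conditional probabilities over tables at
# least as extreme as the observed one.
_, p_value = fisher_exact(table, alternative="two-sided")
print("exact two-sided p-value =", p_value)
```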
24.
Kontkanen, P., Myllymäki, P., Silander, T., Tirri, H., & Grünwald, P. Statistics and Computing, 2000, 10(1): 39-54.
In this paper we are interested in discrete prediction problems in a decision-theoretic setting, where the task is to compute the predictive distribution for a finite set of possible alternatives. This question is first addressed in a general Bayesian framework, where we consider a set of probability distributions defined by some parametric model class. Given a prior distribution on the model parameters and a set of sample data, one possible approach for determining a predictive distribution is to fix the parameters to the instantiation with the maximum a posteriori probability. A more accurate predictive distribution can be obtained by computing the evidence (marginal likelihood), i.e., the integral over all the individual parameter instantiations. As an alternative to these two approaches, we demonstrate how to use Rissanen's new definition of stochastic complexity for determining predictive distributions, and show how the evidence predictive distribution with Jeffreys' prior approaches the new stochastic-complexity predictive distribution in the limit as the amount of sample data increases. To compare the alternative approaches in practice, each of the predictive distributions discussed is instantiated for the Bayesian network model family. In particular, to determine Jeffreys' prior for this model family, we show how to compute the (expected) Fisher information matrix for a fixed but arbitrary Bayesian network structure. In the empirical part of the paper the predictive distributions are compared using the simple tree-structured Naive Bayes model, which is used in the experiments for computational reasons. Experimentation with several public-domain classification datasets suggests that the evidence approach produces the most accurate predictions in the log-score sense. The evidence-based methods are also quite robust in the sense that they predict surprisingly well even when only a small fraction of the full training set is used.
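To make the MAP-versus-evidence distinction concrete in the simplest case, a single multinomial with a Dirichlet prior rather than the paper's Bayesian network models, the sketch below computes both predictive distributions under the Jeffreys prior Dirichlet(1/2, ..., 1/2); the counts are hypothetical.

```python
import numpy as np

def predictive_distributions(counts, alpha):
    """Predictive P(next outcome = k | data) for one multinomial with a
    Dirichlet(alpha) prior, under two of the approaches the paper compares:
    (1) plug in the MAP parameter estimate, and (2) integrate the parameters
    out (the 'evidence' / marginal-likelihood predictive)."""
    counts, alpha = np.asarray(counts, float), np.asarray(alpha, float)
    n, a0, K = counts.sum(), alpha.sum(), len(counts)
    # MAP plug-in: posterior mode (requires counts[k] + alpha[k] > 1 for all k).
    theta_map = (counts + alpha - 1.0) / (n + a0 - K)
    # Evidence predictive: parameters integrated out, i.e. the posterior mean.
    theta_evidence = (counts + alpha) / (n + a0)
    return theta_map, theta_evidence

counts = [3, 2, 1]                       # hypothetical observed counts
jeffreys = np.full(3, 0.5)               # Jeffreys prior for a single multinomial
map_pred, ev_pred = predictive_distributions(counts, jeffreys)
print("MAP plug-in predictive:", np.round(map_pred, 3))
print("evidence predictive:   ", np.round(ev_pred, 3))
```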
25.
M-quantile models with application to poverty mapping
Over the last decade there has been growing demand for estimates of population characteristics at small area level. Unfortunately, cost constraints in the design of sample surveys lead to small sample sizes within these areas, and as a result direct estimation, using only the survey data, is inappropriate since it yields estimates with unacceptable levels of precision. Small area models are designed to tackle the small sample size problem. The most popular class of models for small area estimation is random effects models that include random area effects to account for between-area variation. However, such models also depend on strong distributional assumptions, require a formal specification of the random part of the model, and do not easily allow for outlier-robust inference. An alternative approach to small area estimation based on the use of M-quantile models was recently proposed by Chambers and Tzavidis (Biometrika 93(2):255-268, 2006) and Tzavidis and Chambers (Robust prediction of small area means and distributions. Working paper, 2007). Unlike traditional random effects models, M-quantile models do not depend on strong distributional assumptions and automatically provide outlier-robust inference. In this paper we illustrate for the first time how M-quantile models can be practically employed for deriving small area estimates of poverty and inequality. The methodology we propose improves the traditional poverty mapping methods in the following ways: (a) it enables the estimation of the distribution function of the study variable within the small area of interest both under an M-quantile and a random effects model, (b) it provides analytical, instead of empirical, estimation of the mean squared error of the M-quantile small area mean estimates, and (c) it employs an outlier-robust estimation method. The methodology is applied to data from the 2002 Living Standards Measurement Survey (LSMS) in Albania for deriving (a) district-level estimates of the incidence of poverty in Albania, (b) district-level inequality measures, and (c) the distribution function of household per-capita consumption expenditure in each district. Small area estimates of poverty and inequality show that the poorest Albanian districts are in the mountainous regions (north and north-east), with the wealthiest districts, which are also linked with high levels of inequality, in the coastal (south-west) and southern parts of the country. We discuss the practical advantages of our methodology and note the consistency of our results with results from previous studies. We further demonstrate the usefulness of the M-quantile estimation framework through design-based simulations based on two realistic survey data sets containing small area information, and show that the M-quantile approach may be preferable when the aim is to estimate the small area distribution function.
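As a rough sketch of the underlying fitting machinery (a bare-bones asymmetric-Huber M-quantile linear fit via iteratively reweighted least squares, not the authors' full small-area methodology or MSE estimation), one might proceed as follows; the data and tuning constant are illustrative.

```python
import numpy as np

def m_quantile_fit(x, y, q=0.5, c=1.345, tol=1e-8, max_iter=200):
    """Linear M-quantile fit at level q via IRLS with an asymmetric Huber
    influence function (a minimal version of the estimating equations only)."""
    X = np.column_stack([np.ones(len(y)), x])            # intercept + covariate(s)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS starting values
    for _ in range(max_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust (MAD) scale
        u = r / s
        psi = np.clip(u, -c, c)                          # Huber influence
        psi *= 2.0 * np.where(u > 0, q, 1.0 - q)         # asymmetric tilt toward q
        w = np.divide(psi, u, out=np.ones_like(u), where=(u != 0))  # IRLS weights
        beta_new = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Hypothetical data: a response with heavy-tailed errors and one covariate.
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 1.0 + 0.8 * x + rng.standard_t(df=3, size=500)
for q in (0.25, 0.5, 0.75):
    print(f"q = {q}: intercept, slope =", np.round(m_quantile_fit(x, y, q), 3))
```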
26.
An Empirical Study of the Role Transition of First-Year English Majors in Project-Based Learning
Project-based learning is a student-centered instructional model. When project-based learning is introduced to first-year English majors, the first challenge they face is the role transition it requires. Drawing on behavior-modification theory from educational psychology, project-based learning theory, and contemporary assessment theory, this study constructs a behavioral intervention model for project-based learning, intended to help first-year students shift from independent, competitive learners to autonomous, cooperative learners. The results of an empirical study applying this intervention model to first-year English majors show that the behavioral intervention model helps promote students' role transition in project-based learning and helps foster cooperation and autonomy in their learning.
27.
28.
In the context of an objective Bayesian approach to the multinomial model, Dirichlet(a, …, a) priors with a < 1 have previously been shown to be inadequate in the presence of zero counts, suggesting that the uniform prior (a = 1) is the preferred candidate. In the presence of many zero counts, however, this prior may not be satisfactory either. A model selection approach is proposed, allowing for the possibility of zero parameters corresponding to zero-count categories. This approach results in a posterior mixture of Dirichlet distributions and marginal mixtures of beta distributions, which seem to avoid the problems that potentially result from the various proposed Dirichlet priors, in particular in the context of extreme data with zero counts.
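A possible illustration of the mixture idea, under simplifying assumptions not taken from the paper (a uniform Dirichlet prior on the retained categories and equal prior weight on every model that zeroes out some subset of the zero-count categories): the posterior is then a mixture of Dirichlet distributions, and its mean can be computed by enumeration.

```python
import numpy as np
from itertools import combinations
from scipy.special import gammaln

def zero_aware_posterior_mean(counts):
    """Posterior mean of the cell probabilities when each subset of the
    zero-count categories may have probability exactly 0, the retained
    categories get a uniform Dirichlet(1,...,1) prior, and all such models
    are a priori equally likely.  The posterior is a mixture of Dirichlet
    distributions; this enumerates the mixture and returns its mean."""
    counts = np.asarray(counts)
    n, K = counts.sum(), len(counts)
    positive = np.flatnonzero(counts > 0)
    zeros = np.flatnonzero(counts == 0)

    log_w, means = [], []
    for r in range(len(zeros) + 1):
        for extra in combinations(zeros, r):
            keep = np.concatenate([positive, np.array(extra, dtype=int)])
            k = len(keep)
            # Log marginal likelihood of the counts under this model (the
            # multinomial coefficient is common to all models and dropped).
            log_w.append(gammaln(k) - gammaln(k + n) + gammaln(counts[keep] + 1).sum())
            mean = np.zeros(K)
            mean[keep] = (counts[keep] + 1) / (n + k)    # Dirichlet posterior mean
            means.append(mean)

    log_w = np.array(log_w)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return np.average(means, axis=0, weights=w)

counts = [40, 25, 0, 0, 0]    # hypothetical extreme data with several zero cells
print(np.round(zero_aware_posterior_mean(counts), 4))
```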
29.
30.
In fields such as internet financial transactions and reliability engineering, data often contain an excess of both zero and one observations simultaneously. Because such data fall outside the range that conventional count models can fit, this paper proposes a zero-and-one-inflated geometric distribution regression model. By introducing Pólya-Gamma latent variables into the Bayesian inference, posterior sampling over the high-dimensional parameter vector is decomposed into sampling the latent variables and sampling lower-dimensional parameter blocks, which circumvents Metropolis-Hastings steps and yields draws with higher sampling efficiency. A simulation study is conducted to assess the performance of the proposed estimator for various sample sizes. Finally, a doctoral dissertation data set is analyzed to illustrate the practical utility of the method; the results show that the zero-and-one-inflated geometric regression model estimated with Pólya-Gamma latent variables achieves a better fit.
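The paper's estimation runs a Gibbs sampler with Pólya-Gamma data augmentation; the sketch below stops short of that and only writes down one plausible form of the zero-and-one-inflated geometric regression likelihood and simulates data from it. The link functions, parameterization, and data are assumptions made for illustration.

```python
import numpy as np
from scipy.special import expit

def zoig_loglik(params, X, y):
    """Log-likelihood of a zero-and-one-inflated geometric regression:
    Y = 0 with prob. phi0, Y = 1 with prob. phi1, and otherwise
    Y ~ Geometric(p) on {0, 1, 2, ...} with logit(p) = X @ beta.
    Here phi0 and phi1 are constants; a fuller model would link them
    to covariates as well."""
    phi0 = expit(params[0])
    phi1 = expit(params[1]) * (1.0 - phi0)          # keeps phi0 + phi1 < 1
    p = expit(X @ params[2:])
    pmf = (1.0 - phi0 - phi1) * p * (1.0 - p) ** y  # geometric component
    pmf = np.where(y == 0, pmf + phi0, pmf)         # extra mass at zero
    pmf = np.where(y == 1, pmf + phi1, pmf)         # extra mass at one
    return np.log(pmf).sum()

# Hypothetical data simulated from the model, then the likelihood is evaluated.
rng = np.random.default_rng(2)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
p = expit(X @ np.array([-0.5, 0.8]))
component = rng.choice(3, size=n, p=[0.15, 0.10, 0.75])   # zero / one / geometric
y = np.where(component == 0, 0,
             np.where(component == 1, 1, rng.geometric(p) - 1))
print("log-likelihood at trial parameter values:",
      round(zoig_loglik(np.array([-1.5, -1.9, -0.5, 0.8]), X, y), 2))
```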