11.
While much used in practice, latent variable models raise challenging estimation problems due to the intractability of their likelihood. Monte Carlo maximum likelihood (MCML), as proposed by Geyer & Thompson (1992), is a simulation-based approach to maximum likelihood approximation applicable to general latent variable models. MCML can be described as an importance sampling method in which the likelihood ratio is approximated by Monte Carlo averages of importance ratios simulated from the complete data model corresponding to an arbitrary value of the unknown parameter. This paper studies the asymptotic (in the number of observations) performance of the MCML method in the case of latent variable models with independent observations. This is in contrast with previous works on the same topic, which only considered conditional convergence to the maximum likelihood estimator for a fixed set of observations. A first important result is that when this parameter value is held fixed, the MCML method can only be consistent if the number of simulations grows exponentially fast with the number of observations. If, on the other hand, it is obtained from a consistent sequence of estimates of the unknown parameter, then the requirements on the number of simulations are shown to be much weaker.
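The importance-sampling identity behind MCML can be sketched on a toy Gaussian latent-variable model. Everything below (the model, the parameter values, the grid search) is an illustrative assumption, not the paper's setup: latent z_i ~ N(theta, 1), observed y_i | z_i ~ N(z_i, 1), so marginally y_i ~ N(theta, 2) and the exact MLE of theta is the sample mean.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200
y = rng.normal(1.5, np.sqrt(2.0), size=n)  # observations, true theta = 1.5

def mcml_log_ratio(theta, psi, y, m=2000):
    """Monte Carlo estimate of log L(theta)/L(psi), Geyer-Thompson style:
    draw z ~ p_psi(z | y) (here N((y + psi)/2, 1/2) by conjugacy) and
    average the complete-data importance ratios f_theta(y,z)/f_psi(y,z)."""
    z = rng.normal((y + psi) / 2.0, np.sqrt(0.5), size=(m, y.size))
    # the (y - z)^2 terms cancel in the ratio, leaving only the prior parts
    log_w = (-0.5 * (z - theta) ** 2 + 0.5 * (z - psi) ** 2).sum(axis=1)
    mx = log_w.max()                      # stable log-mean-exp
    return mx + np.log(np.mean(np.exp(log_w - mx)))

# Taking psi from a consistent estimate (the sample mean) keeps the
# importance weights stable -- the regime the abstract calls 'much weaker';
# a psi far from the truth would need exponentially many simulations.
psi = y.mean()
grid = np.linspace(psi - 1.0, psi + 1.0, 81)
theta_mcml = grid[np.argmax([mcml_log_ratio(t, psi, y) for t in grid])]
print(round(theta_mcml, 3), round(y.mean(), 3))
```

With psi chosen this way the maximizer of the estimated log-ratio lands close to the exact MLE; moving psi far from the truth makes the importance weights degenerate, which is exactly the failure mode the abstract quantifies.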
12.
Generalized method of moments (GMM) is used to develop tests for discriminating discrete distributions among the two-parameter family of Katz distributions. Relationships involving moments are exploited to obtain identifying and over-identifying restrictions. The asymptotic relative efficiencies of tests based on GMM are analyzed using the local power approach and the approximate Bahadur efficiency. The paper also gives results of Monte Carlo experiments designed to check the validity of the theoretical findings and to shed light on the small sample properties of the proposed tests. Extensions of the results to compound Poisson alternative hypotheses are discussed.
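The flavour of such moment-restriction tests can be conveyed with a drastic simplification (a single moment condition, not the paper's full GMM machinery). For the Katz family, p_{k+1}/p_k = (a + b·k)/(1 + k), the mean and variance satisfy mu = a/(1-b) and var = a/(1-b)^2, so the Poisson member corresponds to the identifying restriction b = 0, i.e. E[(X - mu)^2 - X] = 0:

```python
import numpy as np

rng = np.random.default_rng(1)

def katz_poisson_test(x):
    """z-statistic for the single moment condition E[(X - mu)^2 - X] = 0,
    which holds for the Poisson (b = 0) member of the Katz family.
    A minimal GMM-style sketch, not the paper's test."""
    m = (x - x.mean()) ** 2 - x           # empirical moment contributions
    g = m.mean()                          # sample moment
    se = m.std(ddof=1) / np.sqrt(x.size)  # its standard error
    return g / se                         # approx N(0, 1) under H0

x_pois = rng.poisson(3.0, size=1000)
x_nb = rng.negative_binomial(3, 0.5, size=1000)  # overdispersed: b > 0
z_pois, z_nb = katz_poisson_test(x_pois), katz_poisson_test(x_nb)
print(round(z_pois, 2), round(z_nb, 2))
```

The Poisson sample yields a z-statistic of ordinary size while the overdispersed negative binomial sample (mean 3, variance 6) rejects strongly.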
13.
Bayesian analysis of discrete time warranty data
Summary.  The analysis of warranty claim data, and their use for prediction, has been a topic of active research in recent years. Field data comprising numbers of units returned under guarantee are examined, covering both situations in which the ages of the failed units are known and in which they are not. The latter case poses particular computational problems for likelihood-based methods because of the large number of feasible failure patterns that must be included as contributions to the likelihood function. For prediction of future warranty exposure, which is of central concern to the manufacturer, the Bayesian approach is adopted. For this, Markov chain Monte Carlo methodology is developed.
14.
Summary.  Non-ignorable missing data, a serious problem in both clinical trials and observational studies, can lead to biased inferences. Quality-of-life measures have become increasingly popular in clinical trials. However, these measures are often incompletely observed, and investigators may suspect that missing quality-of-life data are likely to be non-ignorable. Although several recent references have addressed missing covariates in survival analysis, they all required the assumption that missingness is at random or that all covariates are discrete. We present a method for estimating the parameters in the Cox proportional hazards model when missing covariates may be non-ignorable and continuous or discrete. Our method is useful in reducing the bias and improving efficiency in the presence of missing data. The methodology clearly specifies assumptions about the missing data mechanism and, through sensitivity analysis, helps investigators to understand the potential effect of missing data on study results.
15.
A test of congruence among distance matrices is described. It tests the hypothesis that several matrices, containing different types of variables about the same objects, are congruent with one another, so they can be used jointly in statistical analysis. Raw data tables are turned into similarity or distance matrices prior to testing; they can then be compared to data that naturally come in the form of distance matrices. The proposed test can be seen as a generalization of the Mantel test of matrix correspondence to any number of distance matrices. This paper shows that the new test has the correct rate of Type I error and good power. Power increases as the number of objects and the number of congruent data matrices increase; power is higher when the total number of matrices in the study is smaller. To illustrate the method, the proposed test is used to test the hypothesis that matrices representing different types of organoleptic variables (colour, nose, body, palate and finish) in single‐malt Scotch whiskies are congruent.
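A Mantel-type permutation test generalized to several matrices can be sketched as follows. The statistic (a sum of pairwise correlations of unfolded distances), the permutation scheme, and the simulated data are illustrative assumptions rather than the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

def dist_matrix(coords):
    # pairwise Euclidean distances between rows of a coordinate table
    d = coords[:, None, :] - coords[None, :, :]
    return np.sqrt((d ** 2).sum(-1))

def upper(d):
    i, j = np.triu_indices(len(d), k=1)
    return d[i, j]

def congruence_stat(mats):
    # sum of pairwise Pearson (Mantel) correlations of unfolded distances
    v = [upper(m) for m in mats]
    return sum(np.corrcoef(v[a], v[b])[0, 1]
               for a in range(len(v)) for b in range(a + 1, len(v)))

def congruence_test(mats, n_perm=499):
    # null hypothesis of no congruence: permute the objects of each
    # matrix independently and recompute the statistic
    obs, n, hits = congruence_stat(mats), len(mats[0]), 1
    for _ in range(n_perm):
        perm = [m[np.ix_(p, p)] for m in mats
                for p in [rng.permutation(n)]]
        hits += congruence_stat(perm) >= obs
    return obs, hits / (n_perm + 1)

# three noisy measurements of the same 20 objects -> congruent matrices
base = rng.normal(size=(20, 2))
mats = [dist_matrix(base + rng.normal(0, 0.1, base.shape)) for _ in range(3)]
obs, pval = congruence_test(mats)
print(round(obs, 2), pval)
```

Because the three matrices describe the same underlying configuration, the observed statistic exceeds essentially every permuted value and the p-value is small.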
16.
Abstract.  We correct two proofs concerning Markov properties for graphs representing marginal independence relations.
17.
Drawing on existing research in the literature and on field investigation, supply chain risk factors are classified into eight categories: system risk, supply risk, logistics risk, information risk, financial risk, management risk, demand risk and environmental risk. These factors do not exist in isolation; they interact with and give rise to one another, and their nonlinear superposition amplifies the overall risk level of the supply chain. Firms must therefore recognize the risks they face: only by identifying the root causes of each risk can they appropriately formulate, select and adopt effective measures to avoid it and ensure steady improvement of overall supply chain performance.
18.
This paper argues that Fisher's paradox can be explained away in terms of estimator choice. We analyse by means of Monte Carlo experiments the small sample properties of a large set of estimators (including virtually all available single-equation estimators), and compute the critical values based on the empirical distributions of the t-statistics, for a variety of Data Generation Processes (DGPs), allowing for structural breaks, ARCH effects, etc. We show that precisely the estimators most commonly used in the literature, namely OLS, Dynamic OLS (DOLS) and non-prewhitened FMLS, have the worst performance in small samples and produce rejections of the Fisher hypothesis. If one employs the estimators with the most desirable properties (i.e., the smallest downward bias and the minimum shift in the distribution of the associated t-statistics), or if one uses the empirical critical values, the evidence based on US data is strongly supportive of the Fisher relation, consistent with many theoretical models.
19.
Diagnostics for dependence within time series extremes
Summary. The analysis of extreme values within a stationary time series entails various assumptions concerning its long- and short-range dependence. We present a range of new diagnostic tools for assessing whether these assumptions are appropriate and for identifying structure within extreme events. These tools are based on tail characteristics of joint survivor functions but can be implemented by using existing estimation methods for extremes of univariate independent and identically distributed variables. Our diagnostic aids are illustrated through theoretical examples, simulation studies and by application to rainfall and exchange rate data. On the basis of these diagnostics we can explain characteristics that are found in the observed extreme events of these series and also gain insight into the properties of events that are more extreme than those observed.
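The simplest diagnostic of this kind is an empirical joint-survivor ratio at a lag: P(X_{t+lag} > u | X_t > u) at a high threshold u. The sketch below is a stripped-down stand-in for the paper's tools, contrasting an i.i.d. series with a strongly autocorrelated one (both simulated assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def chi_lag(x, lag, q):
    """Empirical estimate of P(X_{t+lag} > u | X_t > u) where u is the
    q-quantile of the series: a basic joint-survivor-function diagnostic
    for serial dependence in the extremes."""
    u = np.quantile(x, q)
    a, b = x[:-lag] > u, x[lag:] > u
    return (a & b).mean() / a.mean()

n = 20000
iid = rng.normal(size=n)           # no serial dependence in the extremes
ar = np.empty(n)                   # AR(1) with phi = 0.9: clustered extremes
ar[0] = rng.normal()
for t in range(1, n):
    ar[t] = 0.9 * ar[t - 1] + rng.normal()

print(round(chi_lag(iid, 1, 0.95), 3), round(chi_lag(ar, 1, 0.95), 3))
```

For the i.i.d. series the conditional exceedance probability stays near the marginal rate 1 - q = 0.05, while the AR(1) series shows pronounced clustering of threshold exceedances at lag 1.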
20.
Summary.  We discuss the inversion of the gas profiles (ozone, NO3, NO2, aerosols and neutral density) in the upper atmosphere from the spectral occultation measurements. The data are produced by the 'Global ozone monitoring by occultation of stars' instrument on board the Envisat satellite that was launched in March 2002. The instrument measures the attenuation of light spectra at various horizontal paths from about 100 km down to 10–20 km. The new feature is that these data allow the inversion of the gas concentration height profiles. A short introduction is given to the present operational data management procedure with examples of the first real data inversion. Several solution options for a more comprehensive statistical inversion are presented. A direct inversion leads to a non-linear model with hundreds of parameters to be estimated. The problem is solved with an adaptive single-step Markov chain Monte Carlo algorithm. Another approach is to divide the problem into several non-linear smaller dimensional problems, to run parallel adaptive Markov chain Monte Carlo chains for them and to solve the gas profiles in repetitive linear steps. The effect of grid size is discussed, and we present how the prior regularization takes the grid size into account in a way that effectively leads to a grid-independent inversion.
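The adaptive MCMC idea can be illustrated on a toy nonlinear inversion. The forward model, noise level, prior and tuning constants below are all illustrative assumptions (a two-parameter exponential-attenuation model standing in for the occultation forward model), and the sampler is a generic adaptive Metropolis scheme in the spirit of Haario et al., not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)

# toy 'transmission' forward model: f(x; a, k) = exp(-a * exp(-k * x)),
# loosely mimicking attenuation that decays with altitude x
x = np.linspace(0.0, 5.0, 40)
true = np.array([2.0, 0.8])
def forward(p):
    a, k = p
    return np.exp(-a * np.exp(-k * x))
sigma = 0.01
y = forward(true) + rng.normal(0, sigma, size=x.size)

def log_post(p):
    if (p <= 0).any():
        return -np.inf                     # flat prior on positive parameters
    r = y - forward(p)
    return -0.5 * (r ** 2).sum() / sigma ** 2

# adaptive Metropolis: the proposal covariance is re-estimated from the
# chain history (with a small jitter for numerical stability)
n_iter, adapt_start = 5000, 500
chain = np.empty((n_iter, 2))
p = np.array([1.0, 1.0])
lp = log_post(p)
cov = np.eye(2) * 0.01
for i in range(n_iter):
    if i >= adapt_start and i % 100 == 0:
        cov = np.cov(chain[:i].T) * 2.38 ** 2 / 2 + 1e-8 * np.eye(2)
    prop = rng.multivariate_normal(p, cov)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        p, lp = prop, lp_prop
    chain[i] = p
post_mean = chain[n_iter // 2:].mean(axis=0)
print(np.round(post_mean, 2))
```

The second half of the chain concentrates near the generating parameters; the same adaptation idea scales to the hundreds-of-parameter inversion the abstract describes, which is where a hand-tuned fixed proposal becomes impractical.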