81.
The standard tensile test is one of the most frequently used tools for evaluating the mechanical properties of metals. The empirical model proposed by Ramberg and Osgood fits tensile test data with a nonlinear model for strain as a function of stress. It is an errors-in-variables (EIV) model because uncertainty affects both the strain and the stress measuring instruments. SIMEX, a simulation-based method for estimating model parameters, is effective at reducing the bias caused by measurement error in EIV models. The plan of this article is as follows. In Sec. 2, we introduce the Ramberg–Osgood model and a reparametrization corresponding to different assumptions on the independent variable. Section 3 summarizes the SIMEX method for the case at hand. Section 4 compares SIMEX with other estimation methods in order to highlight the peculiarities of the different approaches. The last section offers some concluding remarks.
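A minimal sketch of the SIMEX idea, not the authors' exact implementation: pseudo-data with deliberately inflated measurement error are generated at several noise levels λ, the naive estimator is recomputed at each level, and a quadratic extrapolation back to λ = −1 gives the bias-corrected estimate. The Ramberg–Osgood strain function, the parameter names (E, K, n), the starting values, and the assumption that the stress error standard deviation is known are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def ramberg_osgood(stress, E, K, n):
    """Strain as a function of stress: elastic term plus plastic power-law term."""
    return stress / E + K * (stress / E) ** n

def simex(stress_obs, strain_obs, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), B=200, seed=0):
    """SIMEX for measurement error in the stress variable.

    sigma_u: assumed known std. dev. of the stress measurement error.
    Returns bias-corrected (E, K, n) by quadratic extrapolation to lambda = -1.
    """
    rng = np.random.default_rng(seed)
    p0 = (200e3, 0.5, 5.0)  # rough, hypothetical starting values
    naive, _ = curve_fit(ramberg_osgood, stress_obs, strain_obs, p0=p0, maxfev=20000)
    lam_grid, estimates = [0.0], [naive]
    for lam in lambdas:
        boots = []
        for _ in range(B):
            # add extra noise so the total error variance is (1 + lam) * sigma_u**2
            s_star = stress_obs + np.sqrt(lam) * sigma_u * rng.standard_normal(stress_obs.shape)
            est, _ = curve_fit(ramberg_osgood, s_star, strain_obs, p0=naive, maxfev=20000)
            boots.append(est)
        lam_grid.append(lam)
        estimates.append(np.mean(boots, axis=0))
    estimates = np.array(estimates)
    # extrapolate each parameter back to lambda = -1 with a quadratic in lambda
    corrected = [np.polyval(np.polyfit(lam_grid, estimates[:, j], 2), -1.0)
                 for j in range(estimates.shape[1])]
    return np.array(corrected)
```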
82.
A simple random sample on a random variable X allows its density to be estimated consistently, by a histogram or, preferably, a kernel density estimate. When the sampling is biased towards certain x-values, these methods instead estimate a weighted version of the density function. This article proposes a method for estimating the density and the sampling bias simultaneously. The technique requires two independent samples and utilises ideas from mark–recapture experiments. An estimator of the size of the sampled population also follows directly from this density estimate.
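A minimal sketch of the two-sample idea, under assumptions the abstract does not spell out: a kernel estimate from an unbiased sample targets the density f, a kernel estimate from the biased sample targets a density proportional to w(x)f(x), and their ratio recovers the bias function w up to a normalizing constant. The function name and the default bandwidths are illustrative, not the article's estimator.

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_and_bias(unbiased_sample, biased_sample, grid):
    """Estimate f from the unbiased sample and the sampling-bias function w
    (up to a multiplicative constant) from the ratio of kernel estimates."""
    f_hat = gaussian_kde(unbiased_sample)(grid)             # estimates f(x)
    g_hat = gaussian_kde(biased_sample)(grid)               # estimates w(x) f(x) / c
    w_hat = np.where(f_hat > 1e-12, g_hat / f_hat, np.nan)  # w(x), up to the constant c
    return f_hat, w_hat
```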
83.
Homoscedastic and heteroscedastic Gaussian mixtures differ in the constraints placed on the covariance matrices of the mixture components. A new mixture, called here a strophoscedastic mixture, is defined by a new constraint: the covariance matrices must be identical up to orthogonal transformations, where different transformations are allowed for different matrices. It is shown that the M-step of the EM method for estimating the parameters of strophoscedastic mixtures from sample data is explicitly solvable using singular value decompositions. Consequently, the EM-based maximum likelihood estimation algorithm is as easily implemented for strophoscedastic mixtures as it is for homoscedastic and heteroscedastic mixtures. An example of a “noisy” Archimedean spiral is presented.
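Reading the constraint literally, one plausible formalization (my paraphrase, not a quotation from the paper) is that all component covariance matrices share a single eigenvalue matrix while each is allowed its own orthogonal rotation:

```latex
\Sigma_k \;=\; R_k \,\Lambda\, R_k^{\top}, \qquad
R_k^{\top} R_k = I, \quad k = 1, \dots, K,
```

with Λ diagonal and common to all K components. On this reading, the homoscedastic case corresponds to a common rotation as well, while the heteroscedastic case leaves the Σ_k unrestricted; the component-specific rotations R_k would then be the quantities that the SVD-based M-step updates in closed form.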
84.
This paper extends Lindley's measure of average information to the linear model E(Y | β) = Xβ. An expression is derived that quantifies the average amount of information provided by the n×1 vector of observations Y about the p×1 vector of coefficient parameters β. The effect of the structure of the regressor matrix X on the information measure is discussed. An information-theoretic optimal design is characterized, and some applications are suggested.
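For orientation, a standard closed form for Lindley's expected information gain in this setting, under a normal prior β ~ N(μ₀, Σ₀) and Gaussian errors with variance σ² (assumptions the paper may or may not make), is

```latex
I(X) \;=\; E\!\left[\log \frac{p(\beta \mid Y)}{p(\beta)}\right]
     \;=\; \tfrac{1}{2} \log \det\!\bigl(I_p + \sigma^{-2}\, \Sigma_0\, X^{\top} X \bigr),
```

which follows from the ratio of prior to posterior covariance determinants for Gaussian distributions. Under these assumptions, a design maximizing this measure is a D-optimal-type design in X'X, consistent with the abstract's mention of an information-theoretic optimal design.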
85.
In teaching the development of uniformly most powerful unbiased (UMPU) tests, one rarely discusses the performance of alternative biased tests. It is shown, through the comparison of two independent Bernoulli proportions, that a biased test (the Z test) can be more powerful than the UMPU test (the randomized version of Fisher's exact test) over a large region of the alternative parameter space. A more general example is also given.
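A minimal simulation sketch of the comparison, with illustrative sample sizes and alternative values rather than those of the article; note that the non-randomized Fisher test used here is more conservative than the randomized UMPU version the abstract discusses, which only widens the power gap in favour of the Z test.

```python
import numpy as np
from scipy.stats import fisher_exact, norm

def power_comparison(p1, p2, n1, n2, alpha=0.05, reps=5000, seed=0):
    """Monte Carlo power of the pooled two-proportion Z test vs. Fisher's
    exact test (non-randomized) for H0: p1 = p2, two-sided alternative."""
    rng = np.random.default_rng(seed)
    z_rej = f_rej = 0
    z_crit = norm.ppf(1 - alpha / 2)
    for _ in range(reps):
        x1 = rng.binomial(n1, p1)
        x2 = rng.binomial(n2, p2)
        # pooled Z test (skipped in the degenerate all-0 / all-1 tables)
        p_hat = (x1 + x2) / (n1 + n2)
        se = np.sqrt(p_hat * (1 - p_hat) * (1 / n1 + 1 / n2))
        if se > 0:
            z_rej += abs(x1 / n1 - x2 / n2) / se > z_crit
        # Fisher's exact test on the 2x2 table
        _, p_val = fisher_exact([[x1, n1 - x1], [x2, n2 - x2]])
        f_rej += p_val <= alpha
    return z_rej / reps, f_rej / reps

print(power_comparison(p1=0.5, p2=0.2, n1=25, n2=25))
```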
86.
87.
In this article, we develop a specification technique for building the multiplicative time-varying GARCH models of Amado and Teräsvirta (2008, 2013). The variance is decomposed into an unconditional and a conditional component, such that the unconditional variance component is allowed to evolve smoothly over time. This nonstationary component is defined as a linear combination of logistic transition functions with time as the transition variable. The appropriate number of transition functions is determined by a sequence of specification tests; for that purpose, a coherent modelling strategy based on statistical inference is presented. The strategy relies heavily on Lagrange multiplier type misspecification tests, which are easy to implement because they are entirely based on auxiliary regressions. Finite-sample properties of the strategy and tests are examined by simulation. The modelling strategy is illustrated in practice with two empirical applications: one to daily exchange rate returns and another to daily coffee futures returns.
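In schematic form (notation mine, following the multiplicative decomposition the abstract describes), the return ε_t has variance factored into a conditional GARCH component h_t and a smoothly time-varying unconditional component g_t driven by logistic transitions in rescaled time t/T:

```latex
\varepsilon_t = \zeta_t \sqrt{h_t\, g_t}, \qquad
g_t = \delta_0 + \sum_{l=1}^{r} \delta_l \, G_l\!\left(t/T;\, \gamma_l, c_l\right), \qquad
G(s;\, \gamma, c) = \bigl(1 + e^{-\gamma (s - c)}\bigr)^{-1},
```

where ζ_t is an i.i.d. innovation, h_t follows a standard GARCH recursion, and the number r of transitions is what the sequence of LM-type specification tests selects. The published model allows more general transition functions than the single-location logistic written here.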
88.
This article deals with the issue of using a suitable pseudo-likelihood, instead of an integrated likelihood, when performing Bayesian inference about a scalar parameter of interest in the presence of nuisance parameters. The proposed approach has the advantages of avoiding prior elicitation on the nuisance parameters and the computation of multidimensional integrals. Moreover, it is particularly useful when it is difficult, or even impractical, to write the full likelihood function.

We focus on Bayesian inference about a scalar regression coefficient in various regression models. First, in the context of non-normal regression-scale models, we give a theoretical result showing that there is no loss of information about the parameter of interest when using a posterior distribution derived from a pseudo-likelihood instead of the correct posterior distribution. Second, we present nontrivial applications with high-dimensional, or even infinite-dimensional, nuisance parameters in the context of nonlinear normal heteroscedastic regression models and of models for binary outcomes and count data, accounting also for possible overdispersion. In all these situations, we show that non-Bayesian methods for eliminating nuisance parameters can be usefully incorporated into a one-parameter Bayesian analysis.  相似文献   
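A minimal, generic sketch of the resulting one-parameter analysis (function names are hypothetical; any pseudo-log-likelihood for the scalar coefficient, e.g. a profile or composite likelihood, can be plugged in): combine the pseudo-likelihood with a prior on a grid and normalize.

```python
import numpy as np

def pseudo_posterior(pseudo_loglik, log_prior, grid):
    """Grid approximation to a posterior built from a pseudo-likelihood
    for a scalar parameter, in place of the integrated likelihood."""
    log_post = np.array([pseudo_loglik(p) + log_prior(p) for p in grid])
    log_post -= log_post.max()               # stabilize before exponentiating
    post = np.exp(log_post)
    post /= post.sum() * (grid[1] - grid[0])  # normalize (uniform grid assumed)
    return post
```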
89.
Optimal designs for estimating the parameters, and also the optimum factor combinations, in multiresponse experiments have been considered by various authors. To date, however, optimum designs for mixture experiments have been studied only in the single-response case. In this article, an attempt is made to investigate optimum designs for estimating the optimum mixing proportions in a multiresponse mixture experiment.  相似文献   
90.
In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls, to supply chain risks with inventory controls, and to insurance solvency risks with capital controls. However, uncertainty about the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, because they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications.
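A minimal Monte Carlo sketch of the effect for the log-normal case (sample size and nominal level are illustrative): the threshold is set at the estimated quantile of the fitted distribution, and the realized failure probability, averaged over estimation samples, exceeds the nominal one. Because the standardized threshold is a pivotal quantity, the result does not depend on the true μ and σ, in line with the abstract's exact-calculation claim.

```python
import numpy as np
from scipy.stats import norm

def realized_failure_prob(n=30, nominal=0.01, reps=20000, mu=0.0, sigma=1.0, seed=0):
    """Average true exceedance probability when a log-normal quantile threshold
    is estimated from n observations; compare the result with `nominal`."""
    rng = np.random.default_rng(seed)
    z = norm.ppf(1 - nominal)
    probs = np.empty(reps)
    for i in range(reps):
        logs = rng.normal(mu, sigma, size=n)     # logs of the observed losses
        mu_hat, sig_hat = logs.mean(), logs.std(ddof=1)
        threshold_log = mu_hat + z * sig_hat     # estimated (1 - nominal)-quantile
        # true probability that a new loss exceeds the estimated threshold
        probs[i] = 1 - norm.cdf((threshold_log - mu) / sigma)
    return probs.mean()

print(realized_failure_prob())  # typically noticeably above the nominal 0.01
```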