112.
Models with large numbers of parameters (i.e., hundreds or thousands) often behave as if they depend on only a few of them, with the rest having comparatively little influence. One challenge of sensitivity analysis with such models is screening the parameters to identify the influential ones, and then characterizing their influences.

Large models often require significant computing resources to evaluate their output, and so a good screening mechanism should be efficient: it should minimize the number of times a model must be exercised. This paper describes an efficient procedure to perform sensitivity analysis on deterministic models with specified ranges or probability distributions for each parameter.

It is based on repeated exercising of the model, which can be treated as a black box. Statistical checks can ensure that the screening has identified the parameters that account for the bulk of the model variation. Subsequent sensitivity analysis can use the screening information to reduce the investment required to characterize the influences of the influential and the remaining parameters.

The procedure exploits simplifications in the dependence of a model output on model inputs. It works best where a small number of parameters are much more influential than all the rest. The method is much more sensitive to the number of influential parameters than to the total number of parameters. It is most effective when linear or quadratic effects dominate higher order effects and complex interactions.

The paper presents a set of Mathematica functions that can be used to create a variety of experimental designs useful for sensitivity analysis, including simple random, Latin hypercube, and fractional factorial sampling. Each sampling method can use discretization, folding, grouping, and replication to create composite designs. These techniques have been combined in a composite approach called Iterated Fractional Factorial Design (IFFD).

The procedure is applied to a model of nuclear fuel waste disposal, and to simplified example models to demonstrate the concepts involved.
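
As a rough illustration of one of the sampling designs named above, the following sketch draws a Latin hypercube design. The paper's own functions are in Mathematica; this Python port, and the parameter ranges used, are illustrative assumptions only.

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng=None):
    """Draw a Latin hypercube sample on the unit hypercube.

    Each column is a random permutation of the n_samples
    equal-probability strata, with one point placed uniformly
    at random inside each stratum.
    """
    rng = np.random.default_rng(rng)
    # Stratum index for each sample, independently permuted per parameter.
    strata = np.array([rng.permutation(n_samples) for _ in range(n_params)]).T
    # Uniform jitter inside each stratum.
    jitter = rng.uniform(size=(n_samples, n_params))
    return (strata + jitter) / n_samples

# Example: 10 model runs over 4 parameters, scaled to hypothetical ranges.
lo = np.array([0.0, 1.0, -1.0, 0.5])
hi = np.array([1.0, 5.0,  1.0, 2.0])
design = lo + latin_hypercube(10, 4, rng=42) * (hi - lo)
print(design)
```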
113.
Summary. We consider three sorts of diagnostics for random imputations: displays of the completed data, which are intended to reveal unusual patterns that might suggest problems with the imputations; comparisons of the distributions of observed and imputed data values; and checks of the fit of observed data to the model that is used to create the imputations. We formulate these methods in terms of sequential regression multivariate imputation, an iterative procedure in which the missing values of each variable are randomly imputed conditionally on all the other variables in the completed data matrix. We also consider a recalibration procedure for sequential regression imputations. We apply these methods to the 2002 environmental sustainability index, which is a linear aggregation of 64 environmental variables on 142 countries.
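
A minimal sketch of the sequential regression (chained equations) idea, assuming a Gaussian linear model for each variable; the paper's actual conditional models and its recalibration step are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def sequential_regression_impute(X, n_iter=10, rng=None):
    """Cycle through variables, regressing each on all others in the
    completed matrix and redrawing its missing entries at random from
    the fitted conditional distribution (here, Gaussian linear models)."""
    rng = np.random.default_rng(rng)
    X = X.copy()
    miss = np.isnan(X)
    # Initialize missing entries with column means.
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            others = np.delete(X, j, axis=1)
            obs = ~miss[:, j]
            model = LinearRegression().fit(others[obs], X[obs, j])
            resid_sd = np.std(X[obs, j] - model.predict(others[obs]))
            # Random draws, not deterministic predictions.
            X[miss[:, j], j] = (model.predict(others[miss[:, j]])
                                + rng.normal(0, resid_sd, miss[:, j].sum()))
    return X
```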
114.
A regression-type estimator of the parameter d in fractionally differenced ARMA(p, q) processes is presented. The proposed estimator is shown to be mean square consistent, and its performance is compared with some existing estimators via a simulation study.
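
The abstract does not specify the estimator's form; as one familiar example of a regression-type estimator of d, the classical log-periodogram (GPH-style) regression can be sketched as follows, with the bandwidth choice an illustrative assumption.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Log-periodogram regression estimate of the fractional-differencing
    parameter d: regress the log periodogram at the first m Fourier
    frequencies on log(4 sin^2(freq / 2)); the slope is approximately -d."""
    n = len(x)
    m = m or int(n ** 0.5)             # a common bandwidth choice
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    # Periodogram at the first m Fourier frequencies.
    fft = np.fft.fft(x - np.mean(x))
    periodogram = (np.abs(fft[1:m + 1]) ** 2) / (2 * np.pi * n)
    regressor = np.log(4 * np.sin(freqs / 2) ** 2)
    slope = np.polyfit(regressor, np.log(periodogram), 1)[0]
    return -slope
```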
115.
The most popular goodness-of-fit test for a multinomial distribution is the chi-square test, but this test is generally biased if observations are subject to misclassification. In this paper we discuss how to define a new test procedure when we have double-sample data obtained from the true and the fallible measuring devices. An adjusted chi-square test based on the imputation method and a likelihood ratio test are considered; asymptotically, the two procedures are equivalent. However, an example and simulation results show that the former procedure is not only computationally simpler but also more powerful in finite-sample situations.
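
A sketch of the standard Pearson chi-square test, with a toy demonstration of the misclassification bias the paper addresses; the adjusted double-sample procedure itself requires details not given in this abstract, and the misclassification rate below is an illustrative assumption.

```python
import numpy as np
from scipy.stats import chi2

def chisq_gof(observed, probs):
    """Pearson chi-square goodness-of-fit statistic and p-value."""
    observed = np.asarray(observed, dtype=float)
    expected = observed.sum() * np.asarray(probs, dtype=float)
    stat = ((observed - expected) ** 2 / expected).sum()
    return stat, chi2.sf(stat, len(observed) - 1)

# Toy misclassification: counts from a fair die, but category-1
# observations are recorded as category 2 with probability 0.2.
rng = np.random.default_rng(0)
true_counts = rng.multinomial(600, [1/6] * 6)
moved = rng.binomial(true_counts[0], 0.2)
fallible = true_counts.copy()
fallible[0] -= moved
fallible[1] += moved
print(chisq_gof(true_counts, [1/6] * 6))  # valid test on true counts
print(chisq_gof(fallible, [1/6] * 6))     # biased under misclassification
```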
116.
Analyzing incomplete data to infer the structure of gene regulatory networks (GRNs) is a challenging task in bioinformatics, and Bayesian networks can be used successfully in this field. k-nearest neighbor, singular value decomposition (SVD)-based, and multiple imputation by chained equations are three fundamental methods for imputing missing values. The path consistency (PC) algorithm based on conditional mutual information (PCA–CMI) is a well-known algorithm for inferring GRNs, but it requires a complete data set. Moreover, PCA–CMI is not a stable algorithm: applied to permuted gene orders, it yields different networks. We propose an order-independent algorithm, PCA–CMI–OI, for inferring GRNs. After imputation of the missing data, the performances of PCA–CMI and PCA–CMI–OI are compared. Results show that networks constructed from data imputed by the SVD-based method and inferred by the PCA–CMI–OI algorithm outperform those based on the other imputation methods and PCA–CMI. PC-based algorithms produce an undirected or partially directed network; the mutual information test (MIT) score, which can deal with discrete data, is a well-known method for directing its edges. We also propose a new score, ConMIT, which is appropriate for analyzing continuous data. Results show that applying the ConMIT score improves the precision of directing the edges of the skeleton.
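
A minimal sketch of SVD-based imputation, one of the three imputation methods compared; the rank, iteration count, and convergence rule here are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def svd_impute(X, rank=2, n_iter=50, tol=1e-6):
    """Iterative SVD-based imputation: alternate between a low-rank SVD
    reconstruction of the completed matrix and refreshing the missing
    entries from that reconstruction, until the refresh stabilizes."""
    miss = np.isnan(X)
    filled = np.where(miss, np.nanmean(X, axis=0), X)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        delta = np.max(np.abs(filled[miss] - approx[miss])) if miss.any() else 0.0
        filled[miss] = approx[miss]
        if delta < tol:
            break
    return filled
```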
117.
Likelihood-based inference with missing data is challenging because the observed log-likelihood is often an (intractable) integral over the missing-data distribution, which also depends on the unknown parameter. Approximating the integral by Monte Carlo sampling does not necessarily lead to a valid likelihood over the entire parameter space, because the Monte Carlo samples are generated from a distribution with a fixed parameter value. We consider approximating the observed log-likelihood by importance sampling. In the proposed method, the dependence of the integral on the parameter is properly reflected through fractional weights. We discuss constructing a confidence interval using the profile likelihood ratio test, with a Newton–Raphson algorithm employed to find the interval end points. Two limited simulation studies show the advantage of the Wilks inference over the Wald inference in terms of power, parameter-space conformity, and computational efficiency. A real data example on salamander mating shows that our method also works well with high-dimensional missing data.
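
A toy sketch of the importance-sampling idea, assuming a simple normal latent-variable model of my own choosing: the missing values are drawn once at a fixed proposal parameter, and the dependence on theta enters only through the (fractional) weights, so the approximation is valid across the parameter space.

```python
import numpy as np
from scipy.stats import norm

# Toy model: latent x_i ~ N(theta, 1) is missing; y_i | x_i ~ N(x_i, 1)
# is observed.  The observed log-likelihood integrates x out; we
# approximate that integral with draws generated once at theta0.
rng = np.random.default_rng(1)
theta_true, n, M = 0.5, 40, 1000
x = rng.normal(theta_true, 1.0, size=n)
y = rng.normal(x, 1.0)                            # only y is observed

theta0 = 0.0
x_draws = rng.normal(theta0, 1.0, size=(n, M))    # proposal draws, fixed

def obs_loglik(theta):
    # Log weights f(x; theta) * f(y | x) / g(x; theta0), per draw.
    logw = (norm.logpdf(x_draws, theta, 1.0)
            + norm.logpdf(y[:, None], x_draws, 1.0)
            - norm.logpdf(x_draws, theta0, 1.0))
    # Stable log-mean-exp across the M draws for each unit, then sum.
    m = logw.max(axis=1, keepdims=True)
    return float(np.sum(m.ravel() + np.log(np.mean(np.exp(logw - m), axis=1))))

# Exact check: integrating x out gives y_i ~ N(theta, 2).
print(obs_loglik(0.7), norm.logpdf(y, 0.7, np.sqrt(2.0)).sum())
```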
118.
There are many methods for analyzing longitudinal ordinal response data with random dropout, including maximum likelihood (ML), weighted estimating equations (WEEs), and multiple imputation (MI). In this article, using a Markov model in which the effect of the previous response on the current response is treated as an ordinal variable, the likelihood is partitioned to simplify the use of existing software. Simulated data, generated to represent a three-period longitudinal study with random dropout, are used to compare the performance of the ML, WEE, and MI methods in terms of standardized bias and coverage probabilities. The estimation methods are then applied to a real medical data set.
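
A toy generator for the kind of simulated data described: a three-period ordinal response with first-order Markov dependence and random dropout. The transition matrix, initial distribution, and dropout rate below are illustrative assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(7)
n, K = 500, 3                      # subjects; ordinal categories 0..K-1

# First-order Markov transitions: rows = previous category,
# columns = current category.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

y = np.zeros((n, 3), dtype=int)
y[:, 0] = rng.choice(K, size=n, p=[0.4, 0.4, 0.2])
for t in (1, 2):
    for i in range(n):
        y[i, t] = rng.choice(K, p=P[y[i, t - 1]])

# Random dropout: once a subject drops out, all later waves are missing.
drop_prob = 0.15
observed = np.ones((n, 3), dtype=bool)
for t in (1, 2):
    observed[:, t] = observed[:, t - 1] & (rng.uniform(size=n) > drop_prob)
y_obs = np.where(observed, y, -9)  # -9 marks missing responses
print((~observed[:, 2]).mean(), "dropped out by wave 3")
```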
119.
We describe the design and analysis of a simulation experiment to compare the mean-squared errors (MSEs) of two quantile estimators defined for random walk designs. The dependence of the easily computed MSE of the first estimator on the levels of five factors is examined via multiple regression. This information is used to plan a simulation that computes the MSE of the second estimator using a fraction of a 3^3 x 5^2 factorial, allowing uncorrelated estimates of all main effects and the two-factor interactions of a specified factor. Efficient estimation of the MSE of the second estimator is attempted through antithetic and control variate techniques of variance reduction, with modest success.
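
A brief sketch of the two variance-reduction techniques mentioned, demonstrated on a standard textbook integrand rather than the quantile-estimator MSE itself.

```python
import numpy as np

# Control variates: to estimate E[Y], use a correlated variable C with
# known mean; Y - b*(C - E[C]) has the same mean but smaller variance
# for a well-chosen b.  Toy example: Y = exp(U), C = U, U ~ Uniform(0,1).
rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)
y = np.exp(u)
c = u
b = np.cov(y, c)[0, 1] / np.var(c)        # variance-minimizing coefficient
cv = y - b * (c - 0.5)                    # E[U] = 0.5 is known exactly

print(np.mean(y), np.mean(cv))            # both estimate e - 1 ≈ 1.71828
print(np.var(y), np.var(cv))              # control variate: smaller variance

# Antithetic variates: pair each U with 1 - U so the errors tend to cancel.
anti = 0.5 * (np.exp(u) + np.exp(1 - u))
print(np.var(anti))                       # also much smaller than np.var(y)
```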
120.
It is now standard practice to replace missing data in longitudinal surveys with imputed values, but there is still much uncertainty about the best approach to adopt. Using data from a real survey, we compared different strategies combining multiple imputation and the chained equations method, with two main objectives: (1) to explore the impact of the explanatory variables in the chained regression equations, and (2) to study the effect of imputation on causality between successive waves of the survey. Results were very stable from one simulation to another, and no systematic bias appeared. The critical points of the method lay in the proper choice of covariates and in respecting the temporal relation between the variables.
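
A minimal usage sketch of multiple imputation by chained equations via scikit-learn's IterativeImputer; the synthetic data standing in for the survey waves, and the number of imputations, are illustrative assumptions.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
X[:, 3] += X[:, 0] - 0.5 * X[:, 1]            # wave-to-wave dependence
X[rng.uniform(size=200) < 0.2, 3] = np.nan    # dropout in the later wave

# One chained-equations run per random_state; sample_posterior=True makes
# each run a random draw, giving the multiple imputations.
imputations = [
    IterativeImputer(sample_posterior=True, random_state=m).fit_transform(X)
    for m in range(5)
]
pooled = np.mean(imputations, axis=0)  # pooled point estimates (Rubin's
                                       # rules would also combine variances)
```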