Article search results
By access type:
  Subscription full text: 368
  Free: 4
  Domestic free access: 3
By subject area:
  Management: 140
  Ethnology: 1
  Demography: 3
  Collected works: 6
  Theory and methodology: 6
  General studies: 66
  Sociology: 10
  Statistics: 143
By publication year:
  2024: 1
  2023: 1
  2022: 3
  2021: 4
  2020: 1
  2019: 10
  2018: 6
  2017: 16
  2016: 7
  2015: 5
  2014: 5
  2013: 44
  2012: 23
  2011: 8
  2010: 19
  2009: 23
  2008: 25
  2007: 20
  2006: 16
  2005: 16
  2004: 13
  2003: 16
  2002: 18
  2001: 11
  2000: 15
  1999: 7
  1998: 5
  1997: 7
  1996: 3
  1995: 2
  1994: 5
  1993: 2
  1992: 3
  1991: 1
  1990: 3
  1989: 4
  1987: 4
  1986: 1
  1984: 1
  1983: 1
A total of 375 results were found (search time: 15 ms).
51.
This paper introduces several notions of convergence (and boundedness) for bounded linear operators, functionals, and sequences (sets) of vectors in Menger probabilistic normed spaces, and studies these notions of convergence (boundedness) and the relationships among them.
52.
Facility location problems have traditionally been studied under the assumption that the edge lengths in the network are static and do not change over time. The underlying network could model a city street network for locating emergency facilities or hospitals, or an electronic network for locating information centers. In either case, traffic congestion causes the traversal times on links to change over time. Very often we have estimates of how the edge lengths change over time, and our objective is to choose a set of locations (vertices) as centers such that at every time instant each vertex has a center close to it (the center closest to a vertex may, of course, change over time). We provide approximation algorithms as well as hardness results for the K-center problem under this model. This is the first comprehensive study of approximation algorithms for facility location problems that seek good time-invariant solutions.
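As background for the K-center objective discussed above, the following is a minimal sketch of the classical farthest-point greedy (Gonzalez's algorithm) for the static K-center problem, a well-known 2-approximation; it is not the paper's algorithm for the time-varying setting, and the random planar instance is purely illustrative.

import numpy as np

def k_center_greedy(dist, k):
    # Farthest-point greedy (Gonzalez): classical 2-approximation for the
    # *static* K-center problem on a metric distance matrix dist (n x n).
    centers = [0]                          # arbitrary first center
    d_near = dist[0].copy()                # distance of each vertex to its nearest center
    for _ in range(k - 1):
        nxt = int(np.argmax(d_near))       # farthest vertex from the current centers
        centers.append(nxt)
        d_near = np.minimum(d_near, dist[nxt])
    return centers, float(d_near.max())    # chosen centers and covering radius

# Illustrative instance: random points in the plane.
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 10.0, (50, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
centers, radius = k_center_greedy(dist, k=4)
print(centers, round(radius, 2))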
53.
This paper proposes a new nested algorithm (NPL) for the estimation of a class of discrete Markov decision models and studies its statistical and computational properties. Our method is based on a representation of the solution of the dynamic programming problem in the space of conditional choice probabilities. When the NPL algorithm is initialized with consistent nonparametric estimates of conditional choice probabilities, successive iterations return a sequence of estimators of the structural parameters which we call K-stage policy iteration estimators. We show that the sequence includes as extreme cases a Hotz–Miller estimator (for K=1) and Rust's nested fixed point estimator (in the limit when K→∞). Furthermore, the asymptotic distribution of all the estimators in the sequence is the same and equal to that of the maximum likelihood estimator. We illustrate the performance of our method with several examples based on Rust's bus replacement model. Monte Carlo experiments reveal a trade-off between finite sample precision and computational cost in the sequence of policy iteration estimators.
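To make the structure of the K-stage policy iteration (NPL) sequence concrete, the sketch below runs it on a deliberately tiny two-state, two-action renewal model with hypothetical observation counts; the model, utilities, transition matrices and data are invented for illustration and are not Rust's actual bus replacement model.

import numpy as np
from scipy.optimize import minimize

EULER = 0.5772156649   # Euler's constant (mean of a type-1 extreme value shock)
BETA = 0.9             # discount factor

# Hypothetical two-state, two-action renewal model: a = 1 ("replace") resets
# the state, a = 0 ("keep") lets it deteriorate.
F = np.array([[[0.7, 0.3],    # transition probabilities under a = 0
               [0.0, 1.0]],
              [[1.0, 0.0],    # transition probabilities under a = 1
               [1.0, 0.0]]])

def utility(theta):
    # Flow utility u[s, a]; theta = (maintenance cost slope, replacement cost).
    s = np.array([0.0, 1.0])
    return np.stack([-theta[0] * s, -theta[1] * np.ones(2)], axis=1)

def psi(P, theta):
    # Policy iteration operator: maps CCPs P[s, a] into best-response CCPs.
    u = utility(theta)
    e = np.sum(P * (u + EULER - np.log(P)), axis=1)   # expected flow value under P
    FP = np.einsum('sa,ast->st', P, F)                # state transitions induced by P
    V = np.linalg.solve(np.eye(2) - BETA * FP, e)     # value of following P forever
    v = u + BETA * np.einsum('ast,t->sa', F, V)       # choice-specific values
    v -= v.max(axis=1, keepdims=True)
    return np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)

# Hypothetical data: counts[s, a] of observed (state, action) pairs.
counts = np.array([[300.0, 20.0],
                   [80.0, 100.0]])

P = counts / counts.sum(axis=1, keepdims=True)   # P_0: frequency CCP estimates
theta = np.array([1.0, 1.0])
for k in range(1, 5):                            # K-stage policy iteration estimators
    nll = lambda th, P=P: -np.sum(counts * np.log(psi(P, th)))
    theta = minimize(nll, theta, method='Nelder-Mead').x
    P = psi(P, theta)                            # NPL update of the CCPs
    print(f"K = {k}: theta = {np.round(theta, 3)}")

Each pass maximizes the pseudo-likelihood implied by the current choice probabilities and then updates those probabilities with the policy iteration operator, so K = 1 corresponds to a Hotz–Miller-type estimator and large K approaches the nested fixed point solution.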
54.
Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is an interest in studying latent variables (or latent traits). Usually such latent traits are assumed to be random variables and a convenient distribution is assigned to them. A very common choice for such a distribution has been the standard normal. Recently, Azevedo et al. [Bayesian inference for a skew-normal IRT model under the centred parameterization, Comput. Stat. Data Anal. 55 (2011), pp. 353–365] proposed a skew-normal distribution under the centred parameterization (SNCP), as studied in [R.B. Arellano-Valle and A. Azzalini, The centred parametrization for the multivariate skew-normal distribution, J. Multivariate Anal. 99(7) (2008), pp. 1362–1382], to model the latent trait distribution. This approach allows one to represent any asymmetric behaviour of the latent trait distribution. They also developed a Metropolis–Hastings within Gibbs sampling (MHWGS) algorithm based on the density of the SNCP and showed that the algorithm recovers all parameters properly. Their results indicated that, in the presence of asymmetry, the proposed model and estimation algorithm perform better than the usual model and estimation methods. Our main goal in this paper is to propose another type of MHWGS algorithm based on a stochastic representation (hierarchical structure) of the SNCP studied in [N. Henze, A probabilistic representation of the skew-normal distribution, Scand. J. Statist. 13 (1986), pp. 271–275]. Our algorithm has only one Metropolis–Hastings step, in contrast to the algorithm developed by Azevedo et al., which has two such steps. This not only makes the implementation easier but also reduces the number of proposal densities to be used, which can be a problem in the implementation of MHWGS algorithms, as can be seen in [R.J. Patz and B.W. Junker, A straightforward approach to Markov chain Monte Carlo methods for item response models, J. Educ. Behav. Stat. 24(2) (1999), pp. 146–178; R.J. Patz and B.W. Junker, The applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses, J. Educ. Behav. Stat. 24(4) (1999), pp. 342–366; A. Gelman, G.O. Roberts, and W.R. Gilks, Efficient Metropolis jumping rules, Bayesian Stat. 5 (1996), pp. 599–607]. Moreover, we consider a modified beta prior (which generalizes the one considered by Azevedo et al.) and a Jeffreys prior for the asymmetry parameter. Furthermore, we study the sensitivity of such priors as well as the use of different kernel densities for this parameter. Finally, we assess the impact of the number of examinees, the number of items and the asymmetry level on parameter recovery. Results of the simulation study indicated that our approach performs as well as that of Azevedo et al. in terms of parameter recovery, mainly when the Jeffreys prior is used. They also indicated that the asymmetry level has the highest impact on parameter recovery, even though this impact is relatively small.
A real data analysis is presented jointly with the development of model-fit assessment tools, and the results are compared with those obtained by Azevedo et al. The results indicate that the hierarchical approach makes the MCMC algorithm easier to implement, facilitates convergence diagnostics, and can be very useful for fitting more complex skew IRT models.
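The hierarchical structure referred to above rests on Henze's stochastic representation of the skew-normal, in which a half-normal component is mixed with an independent normal one. A minimal sketch of drawing standardised (mean 0, variance 1) latent traits this way, assuming a shape parameter alpha, is given below; it illustrates only the representation, not the full MHWGS sampler.

import numpy as np

def skew_normal_latent(alpha, size, rng):
    # Henze's representation: z = delta*|u0| + sqrt(1 - delta^2)*u1 ~ SN(alpha),
    # then standardise z to mean 0 and variance 1 (centred-parameterization scale).
    delta = alpha / np.sqrt(1.0 + alpha ** 2)
    u0 = np.abs(rng.standard_normal(size))            # half-normal component
    u1 = rng.standard_normal(size)                    # independent normal component
    z = delta * u0 + np.sqrt(1.0 - delta ** 2) * u1
    b = np.sqrt(2.0 / np.pi)                          # E|u0|
    return (z - b * delta) / np.sqrt(1.0 - (b * delta) ** 2)

rng = np.random.default_rng(0)
theta = skew_normal_latent(alpha=3.0, size=100_000, rng=rng)
print(round(theta.mean(), 3), round(theta.var(), 3))  # approximately 0 and 1

Augmenting the half-normal component as a latent variable is, presumably, what reduces the latent-trait update to a single Metropolis–Hastings step.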
55.
56.
Almost optimal solutions for bin coloring problems (total citations: 1; self-citations: 1; citations by others: 0)
In this paper we study two interesting bin coloring problems, the Minimum Bin Coloring Problem (MinBC) and the Online Maximum Bin Coloring Problem (OMaxBC), motivated by several applications in networking. For the MinBC problem, we present two near-linear-time approximation algorithms that achieve almost optimal solutions, i.e., solutions of value at most OPT+2 and OPT+1 respectively, where OPT is the optimal value. For the OMaxBC problem, we first introduce a deterministic 2-competitive greedy algorithm, and then give lower bounds for any deterministic and randomized (against an adaptive offline adversary) online algorithms. The lower bounds show that our deterministic algorithm achieves the best possible competitive ratio. The research of this paper was partially supported by NSF CAREER award CCF-0546509.
57.
Classical group testing (CGT) is a widely applicable biotechnical procedure used to identify a small number of distinguished objects in a population when the presence of any one of these distinguished objects among a group of others produces an observable result. This paper discusses a variant of CGT called group testing for disjoint pairs (GTDP). The difference between the two is that in GTDP the distinguished items are pairs from, not individual objects in, the population. There are several biological settings in which this abstract model applies. One is DNA hybridization: the presence of pairs of hybridized DNA strands can be detected in a pool of DNA strands. Another is the detection of binding interactions between prey and bait proteins. This paper gives a random pooling method, similar in spirit to hypothesis testing, which identifies pairs of objects from a population that collectively have an observable function. The method is simple to apply, achieves good results, is amenable to automation and can easily be modified to compensate for testing errors. M.A. Bishop is supported by AFOSR FA8750-06-C-0007. A.J. Macula is supported by NSF-0436298 and AFOSR FA8750-06-C-0007.
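A minimal simulation of this pooling model: each item joins each pool independently at random, a pool reads positive exactly when it contains both members of some distinguished pair, and candidate pairs are recovered by a simple elimination rule. The pool parameters, the two distinguished pairs and the elimination decoder below are assumptions made for illustration, not the paper's exact construction.

import itertools
import numpy as np

rng = np.random.default_rng(7)
n_items, n_pools, p = 40, 100, 0.3
true_pairs = {(3, 17), (25, 31)}                  # hypothetical distinguished pairs

# Random pooling design: item i joins pool t independently with probability p.
design = rng.random((n_pools, n_items)) < p

# A pool is positive iff it contains BOTH members of some distinguished pair.
positive = np.array([any(design[t, i] & design[t, j] for (i, j) in true_pairs)
                     for t in range(n_pools)])

# Elimination decoding: a pair is reported only if no negative pool contains
# both of its members and at least one positive pool does.  False negatives
# cannot occur; occasional false positives shrink as more pools are used.
candidates = []
for i, j in itertools.combinations(range(n_items), 2):
    together = design[:, i] & design[:, j]
    if together.any() and not (together & ~positive).any():
        candidates.append((i, j))
print(sorted(candidates))                         # should contain the true pairs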
58.
Group testing, sometimes called pooling design, has been applied to a variety of problems such as blood testing, multiple-access communication and coding theory, among others. Recently, screening experiments in molecular biology have become its most important application. In this paper, we review several models arising in this application, focusing on decoding; that is, we give a comparative study of how the decoding problem is solved in each of these models.
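As a concrete example of the decoding step this comparison centres on, here is the simplest elimination decoder for classical nonadaptive group testing (often called COMP): any item that appears in a negative pool is cleared, and every remaining item is declared positive. The random design and defective set below are hypothetical.

import numpy as np

def comp_decode(design, outcomes):
    # design: (tests, items) boolean incidence matrix; outcomes: (tests,) booleans.
    # An item is cleared if it occurs in at least one negative test.
    cleared = (design & ~outcomes[:, None]).any(axis=0)
    return np.flatnonzero(~cleared)               # declared defectives (may include extras)

rng = np.random.default_rng(3)
n_items, n_tests, defectives = 100, 40, [4, 57, 83]
design = rng.random((n_tests, n_items)) < 0.1
outcomes = design[:, defectives].any(axis=1)      # a test is positive iff it hits a defective
print(comp_decode(design, outcomes))              # superset of the true defectives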
59.
When a genetic algorithm (GA) is employed in a statistical problem, the result is affected both by variability due to sampling and by the stochastic elements of the algorithm, and both components should be controlled in order to obtain reliable results. In the present work we analyze parametric estimation problems tackled by GAs and pursue two objectives. The first is a formal variability analysis of the final estimates, showing that their variability can easily be decomposed into the two sources. The second introduces a framework for GA estimation with fixed computational resources, a form of the statistical and computational trade-off question that is crucial in recent problems. In this situation the result should be optimal from both the statistical and the computational point of view, taking into account the two sources of variability and the constraints on resources. Simulation studies are presented to illustrate the proposed method and the statistical and computational trade-off question.
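A minimal sketch of the variability decomposition discussed above: a toy real-coded GA estimates a location parameter, the GA is run R times on each of S independent samples, and the law of total variance splits the overall variability of the estimates into a sampling component (between samples) and an algorithmic component (within samples). The GA operators, the toy estimation problem and the sample sizes are assumptions chosen only for illustration.

import numpy as np

def ga_estimate(sample, pop_size=30, n_gen=40, seed=None):
    # Minimal real-coded GA (tournament selection, arithmetic crossover,
    # Gaussian mutation) minimising a least-squares criterion for a location
    # parameter -- a toy stand-in for a harder estimation problem.
    r = np.random.default_rng(seed)
    pop = r.normal(0.0, 5.0, pop_size)
    fit = lambda p: -np.array([np.sum((sample - m) ** 2) for m in p])
    for _ in range(n_gen):
        f = fit(pop)
        idx = r.integers(0, pop_size, (pop_size, 4))
        mum = np.where(f[idx[:, 0]] > f[idx[:, 1]], pop[idx[:, 0]], pop[idx[:, 1]])
        dad = np.where(f[idx[:, 2]] > f[idx[:, 3]], pop[idx[:, 2]], pop[idx[:, 3]])
        w = r.random(pop_size)
        pop = w * mum + (1.0 - w) * dad + r.normal(0.0, 0.2, pop_size)
    return pop[np.argmax(fit(pop))]

rng = np.random.default_rng(0)
S, R, n = 20, 10, 50                               # samples, GA runs per sample, sample size
est = np.empty((S, R))
for s in range(S):
    sample = rng.normal(1.0, 2.0, n)               # sampling variability
    for j in range(R):
        est[s, j] = ga_estimate(sample, seed=1000 * s + j)   # algorithmic variability

within = est.var(axis=1, ddof=1).mean()            # GA (algorithmic) component
between = est.mean(axis=1).var(ddof=1)             # sampling component
print(f"algorithmic component ~ {within:.4f}, sampling component ~ {between:.4f}")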
60.
A new variation of the Kendall correlation, the "Probabilistic Support Kendall Correlation" (PSKC), is proposed, based on applying the notion of "probabilistic support" to the pairwise comparisons of measurements. It is shown that the most basic version of the PSKC is proportional to the standard Kendall correlation under the assumption of no ties; however, the PSKC also lends itself to various extensions involving restrictions to specific sorts of comparisons or consideration of the relative magnitudes of different comparisons (the latter being the PSCC, or Probabilistic Support Comparison Correlation, introduced here). It is shown that, under broad conditions, the Probabilistic Support Kendall Correlation (and hence the standard Kendall correlation as well as the more general versions of the PSKC) has a strong and elegant transitivity property.
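For reference, the standard Kendall correlation that the basic (no-ties) PSKC is proportional to can be computed directly from the pairwise comparisons, as in the short sketch below; this is the ordinary Kendall tau-a, not an implementation of the PSKC or PSCC themselves.

from itertools import combinations

def kendall_tau(x, y):
    # Kendall's tau-a from explicit pairwise comparisons (no ties assumed):
    # (number of concordant pairs - number of discordant pairs) / total pairs.
    concordant = discordant = 0
    for i, j in combinations(range(len(x)), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (len(x) * (len(x) - 1) / 2)

print(kendall_tau([1, 2, 3, 4, 5], [1, 3, 2, 5, 4]))   # 0.6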