1,280 query results in total; items 801–810 are shown below.
801.
In this paper, we study the MINimum d-Disjunct Submatrix problem (MIN-d-DS), which can be used to select the minimum number of non-unique probes for virus identification. We prove that MIN-d-DS is NP-hard for any fixed d. Using d-disjunct matrices, we present an O(log k)-approximation algorithm, where k is an upper bound on the maximum number of targets hybridized to a probe. We also present a (1 + (d+1)log n)-approximation algorithm that identifies at most d targets in the presence of experimental errors. Both approximation algorithms admit linear-time decoding. The research of T. Znati was supported in part by the National Science Foundation under grant CCF-0548895.
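As a rough illustration (not the paper's algorithm), the standard naive decoder used with d-disjunct designs runs in time linear in the number of 1-entries of the test matrix: any item appearing in a negative test is ruled out, and with a d-disjunct matrix and at most d true targets the survivors are exactly the targets. The names below are ours.

```python
import numpy as np

def naive_decode(M, outcomes):
    """Decode non-adaptive group tests with a d-disjunct design.

    M        : (t, n) 0/1 matrix; M[i, j] = 1 if item j is in test i.
    outcomes : length-t 0/1 vector of test results (1 = positive).

    An item is ruled out if it appears in any negative test; with a
    d-disjunct M and at most d true targets, the survivors are exactly
    the targets.
    """
    M = np.asarray(M)
    outcomes = np.asarray(outcomes, dtype=bool)
    in_negative_test = (M[~outcomes] == 1).any(axis=0)
    return np.flatnonzero(~in_negative_test)

# Toy example: 3 items, tests {0,1}, {1,2}, {0,2}; item 1 is the target.
M = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
print(naive_decode(M, [1, 1, 0]))  # -> [1]
```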
802.
Abstract

When estimating a proportion p by group testing, it is sometimes impractical to obtain additional individuals to increase precision, but it is possible to retest groups formed from the individuals within the groups that tested positive at the first stage. Hepworth and Watson assessed four methods of retesting and recommended a random regrouping of individuals from the first stage. They developed an estimator of p for their proposed method and, because of its analytic complexity, used simulation to examine its variance properties. We now provide an analytical solution for the variance of the estimator and compare its performance with the earlier simulated results. We show that our solution gives an acceptable approximation over a reasonable range of circumstances.
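The Hepworth–Watson retesting estimator itself is involved; as a hedged sketch of the first-stage building block only, the classical single-stage estimate of p from n groups of size s with T positive groups is p_hat = 1 - (1 - T/n)^(1/s), with a delta-method variance approximation:

```python
import numpy as np

def group_testing_mle(T, n, s):
    """Classical single-stage group-testing MLE of prevalence p.

    T : number of positive groups, n : number of groups, s : group size.
    The variance is a delta-method approximation, so it degrades when
    T is close to 0 or n.
    """
    theta_hat = T / n                      # estimated P(group tests positive)
    p_hat = 1.0 - (1.0 - theta_hat) ** (1.0 / s)
    var_hat = theta_hat * (1.0 - theta_hat) ** (2.0 / s - 1.0) / (n * s**2)
    return p_hat, var_hat

p_hat, var_hat = group_testing_mle(T=11, n=50, s=10)
print(f"p_hat = {p_hat:.4f}, SE = {np.sqrt(var_hat):.4f}")
```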
803.
Abstract. This paper proposes, implements and investigates a new non-parametric two-sample test for detecting stochastic dominance. We pose the question of detecting stochastic dominance in a non-standard way, motivated by existing evidence that standard formulations and the pertaining procedures may lead to serious errors in inference. The procedure we introduce combines testing with model selection. More precisely, we reparametrize the testing problem in terms of Fourier coefficients of well-known comparison densities. The estimated Fourier coefficients are then used to form a kind of signed smooth rank statistic. In such a setting, the number of Fourier coefficients incorporated into the statistic is a smoothing parameter, which we determine via a flexible selection rule. We establish the asymptotic properties of the new test under the null and alternative hypotheses. The finite-sample performance of the new solution is demonstrated through Monte Carlo studies and an application to a set of survival times.
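The paper's exact statistic and selection rule are not reproduced here; the following rough sketch only illustrates the ingredients under our own simplifying assumptions: rescaled ranks of one sample within the pooled sample estimate the comparison distribution, orthonormal Legendre polynomials give Fourier coefficients, a BIC-type rule picks the dimension, the first coefficient supplies the sign, and the p-value comes from permutation rather than asymptotics.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def phi(j, u):
    # Orthonormal Legendre basis on [0, 1]: phi_j(u) = sqrt(2j+1) * P_j(2u - 1)
    return np.sqrt(2 * j + 1) * Legendre.basis(j)(2.0 * u - 1.0)

def signed_smooth_stat(x, y, k_max=5):
    """Crude signed smooth statistic: Fourier coefficients of the comparison
    density of x within the pooled sample, dimension chosen by a BIC-type
    rule, signed by the first coefficient (direction of a possible shift)."""
    N = len(x) + len(y)
    pooled = np.sort(np.concatenate([x, y]))
    u = np.searchsorted(pooled, x, side="right") / (N + 1.0)
    b = np.array([phi(j, u).mean() for j in range(1, k_max + 1)])
    scores = [len(x) * np.sum(b[:k] ** 2) - k * np.log(N)
              for k in range(1, k_max + 1)]
    k_hat = int(np.argmax(scores)) + 1
    return np.sign(b[0]) * len(x) * np.sum(b[:k_hat] ** 2)

def permutation_pvalue(x, y, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    obs = signed_smooth_stat(x, y)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        count += signed_smooth_stat(pooled[:len(x)], pooled[len(x):]) >= obs
    return (1 + count) / (1 + n_perm)
```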
804.
The least product relative error (LPRE) estimator and a test statistic for linear hypotheses about the regression parameters in the multiplicative regression model are studied when the number of covariates increases with the sample size. Properties of the LPRE estimator and the test statistic are obtained, including consistency, a Bahadur representation, and asymptotic distributions. Furthermore, we extend the LPRE to a more general relative error criterion and provide its statistical properties. Numerical studies, including simulations and two real examples, show that the proposed estimator performs well.
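For the multiplicative model y = exp(x'beta) * eps with y > 0, the LPRE criterion (|y - m|/y)(|y - m|/m) simplifies algebraically to y/m + m/y - 2, a smooth convex function of beta. A minimal fitting sketch under that assumption (simulated data, our variable names):

```python
import numpy as np
from scipy.optimize import minimize

def lpre_fit(X, y):
    """Least product relative error fit of y = exp(X @ beta) * eps, y > 0.
    The LPRE objective sum_i (|y_i - m_i|/y_i)(|y_i - m_i|/m_i) with
    m_i = exp(x_i'beta) equals the smooth convex form used below."""
    def objective(beta):
        eta = X @ beta
        return np.sum(y * np.exp(-eta) + np.exp(eta) / y - 2.0)
    def gradient(beta):
        eta = X @ beta
        return X.T @ (np.exp(eta) / y - y * np.exp(-eta))
    res = minimize(objective, np.zeros(X.shape[1]), jac=gradient, method="BFGS")
    return res.x

# Simulated example: intercept plus two covariates.
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([0.5, 1.0, -0.5])
y = np.exp(X @ beta_true) * rng.lognormal(mean=0.0, sigma=0.3, size=n)
print(lpre_fit(X, y))  # close to beta_true
```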
805.
This paper illustrates that hypothesis testing procedures can be derived from the penalized likelihood approach. From this point of view, many traditional hypothesis tests, including the two-sample mean test, the score test, and Hotelling's T² test, are revisited under the penalized likelihood framework. A similar framework also applies to the empirical likelihood.
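For concreteness, here is the classical (non-penalized) two-sample Hotelling's T² that the paper revisits; this is the textbook baseline, not the paper's penalized derivation:

```python
import numpy as np
from scipy import stats

def hotelling_t2(x, y):
    """Classical two-sample Hotelling's T^2 with pooled covariance;
    returns the statistic and its exact-F p-value under normality."""
    n1, p = x.shape
    n2, _ = y.shape
    diff = x.mean(axis=0) - y.mean(axis=0)
    S = ((n1 - 1) * np.cov(x, rowvar=False) +
         (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(S, diff)
    f_stat = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
    p_value = stats.f.sf(f_stat, p, n1 + n2 - p - 1)
    return t2, p_value
```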
806.
The “What If” analysis is applicable in research and teaching situations that use statistical significance testing. Its first use is pedagogical: the “What If” analysis gives instructors an interactive tool that visually represents what statistical significance testing entails and the variables that affect the commonly misinterpreted p_CALCULATED value. To develop a solid understanding of what affects the p_CALCULATED value, students directly manipulate data within the Excel sheet to create a visual representation that explicitly demonstrates how these variables affect it. The second use is primarily for researchers, in two ways: (1) a “What If” analysis can be run a priori to estimate the sample size a researcher may wish to use for a study; and (2) it can be run a posteriori to aid in the interpretation of results. Used in these ways, the “What If” analysis gives researchers another tool for conducting high-quality research and reporting results accurately.
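In the spirit of the spreadsheet exercise, a small Python stand-in for the Excel sheet showing how p_CALCULATED responds to sample size when the observed standardized effect is held fixed (a toy one-sample t test, our own setup):

```python
import numpy as np
from scipy import stats

def p_calculated(effect_size, n):
    """Two-sided p value of a one-sample t test when the observed
    standardized effect (mean / sd) is `effect_size` with n observations."""
    t = effect_size * np.sqrt(n)
    return 2 * stats.t.sf(abs(t), df=n - 1)

# The same observed effect crosses p < .05 purely by growing n:
for n in (10, 30, 100, 300):
    print(f"n = {n:>3}:  p = {p_calculated(0.3, n):.4f}")
```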
807.
A fundamental theorem in hypothesis testing is the Neyman-Pearson (N-P) lemma, which yields the most powerful test of simple hypotheses. In this article, we establish a Bayesian framework for hypothesis testing and extend the Neyman-Pearson lemma to create the Bayesian most powerful test of general hypotheses, thus providing an optimality theory for determining thresholds of Bayes factors. Unlike conventional Bayes tests, the proposed Bayesian test is able to control the type I error.
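A hedged simulation sketch of the classical simple-vs-simple case only: here the likelihood ratio coincides with the Bayes factor, and its rejection threshold can be calibrated by Monte Carlo to control the type I error (the paper's general-hypothesis construction is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu0, mu1, alpha = 20, 0.0, 0.5, 0.05

def bayes_factor(x):
    # Simple vs simple: likelihood ratio = Bayes factor for N(mu1,1) vs N(mu0,1).
    return np.exp(np.sum((x - mu0)**2 - (x - mu1)**2) / 2.0)

# Calibrate the threshold under H0 so that P(BF > c | H0) = alpha.
bf_null = np.array([bayes_factor(rng.normal(mu0, 1, n)) for _ in range(20000)])
c = np.quantile(bf_null, 1 - alpha)

# Estimated power under H1 with the calibrated threshold:
bf_alt = np.array([bayes_factor(rng.normal(mu1, 1, n)) for _ in range(20000)])
print(f"threshold c = {c:.2f}, size = {(bf_null > c).mean():.3f}, "
      f"power = {(bf_alt > c).mean():.3f}")
```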
808.
We study the finite-sample performance of test statistics in linear regression models where the error dependence is of unknown form. With an unknown dependence structure, there is traditionally a trade-off between the maximum lag over which the correlation is estimated (the bandwidth) and the amount of heterogeneity in the process. When heterogeneity is allowed for, through conditional heteroskedasticity, the correlation at far lags is generally omitted, and the resulting inflation of the empirical size of test statistics has long been recognized. To allow for correlation at far lags, we study test statistics constructed under the possibly misspecified assumption of conditional homoskedasticity. To improve the accuracy of the test statistics, we employ the second-order asymptotic refinement of Rothenberg [Approximate power functions for some robust tests of regression coefficients, Econometrica 56 (1988), pp. 997–1019] to determine the critical values. The simulation results of this paper suggest that when sample sizes are small, modelling the heterogeneity of a process is secondary to accounting for dependence. We find that a conditionally homoskedastic covariance matrix estimator (used in conjunction with Rothenberg's second-order critical value adjustment) improves test size with only a minimal loss in test power, even when the data manifest significant amounts of heteroskedasticity. In some specifications, the size inflation was cut by nearly 40% relative to the traditional heteroskedasticity and autocorrelation consistent (HAC) test. Finally, we note that the proposed test statistics do not require the researcher to specify the bandwidth or the kernel.
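For reference, a compact sketch of the standard Bartlett-kernel (Newey-West) long-run covariance estimator that the traditional HAC test relies on; the explicit `bandwidth` argument here is exactly the choice the proposed statistics avoid:

```python
import numpy as np

def newey_west(scores, bandwidth):
    """Bartlett-kernel (Newey-West) long-run covariance of a (T, k)
    array of regression scores u_t * x_t; `bandwidth` is the maximum lag."""
    T, k = scores.shape
    s = scores - scores.mean(axis=0)
    omega = s.T @ s / T                         # lag-0 covariance
    for lag in range(1, bandwidth + 1):
        w = 1.0 - lag / (bandwidth + 1.0)       # Bartlett weights
        gamma = s[lag:].T @ s[:-lag] / T        # lag-`lag` autocovariance
        omega += w * (gamma + gamma.T)
    return omega
```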
809.
In this paper, we first present two characterizations of the exponential distribution and then introduce three exact goodness-of-fit tests for exponentiality. By simulation, the powers of the proposed tests under various alternatives are compared with those of existing tests.
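The abstract does not spell out its three tests; as a hedged illustration of what "exact" can mean here, a Lilliefors-type Kolmogorov-Smirnov test for exponentiality with estimated rate: the statistic is scale invariant, so its null distribution does not depend on the unknown rate and simulating Exp(1) samples gives an exact Monte Carlo p-value.

```python
import numpy as np

def ks_exp_stat(x):
    """KS distance between the empirical CDF and Exp(rate = 1/xbar)."""
    x = np.sort(x)
    n = len(x)
    F = 1.0 - np.exp(-x / x.mean())
    return max(np.max(np.arange(1, n + 1) / n - F),
               np.max(F - np.arange(n) / n))

def exact_pvalue(x, n_sim=10000, seed=0):
    """Scale invariance under H0 makes the simulated null exact."""
    rng = np.random.default_rng(seed)
    obs = ks_exp_stat(x)
    sims = np.array([ks_exp_stat(rng.exponential(1.0, len(x)))
                     for _ in range(n_sim)])
    return (1 + np.sum(sims >= obs)) / (1 + n_sim)
```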
810.
Statistics, 2012, 46(6), 1187–1209
Abstract

According to the general law of likelihood, the strength of statistical evidence for a hypothesis as opposed to its alternative is the ratio of their likelihoods, each maximized over the parameter of interest. Consider the problem of assessing the weight of evidence for each of several hypotheses. Under a realistic model with a free parameter for each alternative hypothesis, this leads to weighing evidence without any shrinkage toward a presumption of the truth of each null hypothesis. That lack of shrinkage can lead to many false positives in settings with large numbers of hypotheses. A related problem is that point hypotheses can never have more support than their alternatives. Both problems may be solved by fusing the realistic model with a model of a more restricted parameter space for use with the general law of likelihood. Applying the proposed framework of model fusion to data sets from genomics and education yields intuitively reasonable weights of evidence.
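A toy numeric illustration (our own example) of the point-null problem the abstract describes: for a point null mu = 0 against a free alternative, the alternative's likelihood is maximized at the sample mean, so the maximized ratio can never favor the null by more than 1.

```python
import numpy as np
from scipy import stats

x = stats.norm.rvs(loc=0.2, scale=1.0, size=30, random_state=0)

def loglik(mu):
    return np.sum(stats.norm.logpdf(x, loc=mu, scale=1.0))

# Point null H0: mu = 0 vs free alternative H1: mu unrestricted.
# L1 is maximized at the sample mean, so the ratio is always >= 1:
# the point null can never have more support than its alternative.
lr = np.exp(loglik(x.mean()) - loglik(0.0))
print(f"maximized likelihood ratio (H1 vs H0) = {lr:.2f}")
```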