A total of 4392 results matched the query; entries 221–230 are listed below.
221.
ABSTRACT

Nonhomogeneous Poisson processes (NHPPs) provide many models for hardware and software reliability analysis. To select an appropriate NHPP model, goodness-of-fit (GOF) tests must be carried out. Many GOF tests have been developed for power-law processes, but for other NHPP models only the Conditional Probability Integral Transformation (CPIT) test has been proposed, and the CPIT test is less powerful and cannot be applied to some NHPP models. This article proposes a general GOF test based on the Laplace statistic for a large class of NHPP models with intensity functions of the form αλ(t, β). Simulation results show that this test is more powerful than the CPIT test.
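The paper's generalized statistic is not reproduced here; the following is an editorial sketch of the classical Laplace trend statistic for event times observed on (0, T], which Laplace-type GOF tests are built around. The failure times are made up for illustration.

```python
import numpy as np
from scipy.stats import norm

def laplace_statistic(event_times, T):
    """Classical Laplace trend statistic for a point process observed on (0, T].

    Under a homogeneous Poisson process the statistic is approximately N(0, 1);
    large positive values indicate an increasing intensity.
    """
    t = np.asarray(event_times, dtype=float)
    n = len(t)
    return (t.sum() - n * T / 2.0) / (T * np.sqrt(n / 12.0))

# Hypothetical failure times (hours) observed up to T = 1000 hours.
times = [112.0, 250.0, 310.0, 480.0, 590.0, 640.0, 780.0, 940.0]
u = laplace_statistic(times, T=1000.0)
p_value = 2 * norm.sf(abs(u))          # two-sided test of "no trend"
print(f"Laplace statistic = {u:.3f}, p-value = {p_value:.3f}")
```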
222.
This paper considers a likelihood ratio test of hypotheses defined by non-oblique closed convex cones satisfying the so-called iteration projection property, for a set of k normal means. Critical values of the test are obtained from the chi-bar-squared distribution. Obtuse cones are introduced as a particular class of cones that are non-oblique with every one of their faces. Examples with the simple tree order cone and the total order cone illustrate the results.
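The paper's analytical critical values are not reproduced here; the sketch below only illustrates the chi-bar-squared null distribution by simulating the likelihood ratio statistic for the total order cone (equal means versus nondecreasing means, unit variances), using a pool-adjacent-violators projection. The setup is an assumption made for illustration.

```python
import numpy as np

def pava(y):
    """Isotonic (nondecreasing) projection of y via pool-adjacent-violators, equal weights."""
    means, sizes = [], []
    for v in np.asarray(y, dtype=float):
        means.append(v)
        sizes.append(1)
        while len(means) > 1 and means[-2] > means[-1]:   # merge violating blocks
            m2, s2 = means.pop(), sizes.pop()
            m1, s1 = means.pop(), sizes.pop()
            means.append((m1 * s1 + m2 * s2) / (s1 + s2))
            sizes.append(s1 + s2)
    return np.repeat(means, sizes)

def lrt_stat(y):
    """LRT statistic for equal means vs. the total order cone (unit variances)."""
    proj = pava(y)
    return float(np.sum((proj - y.mean()) ** 2))

rng = np.random.default_rng(0)
k = 5
# Chi-bar-squared null distribution approximated by simulation under H0 (all means equal).
null = np.array([lrt_stat(rng.standard_normal(k)) for _ in range(20000)])
crit_95 = np.quantile(null, 0.95)
print(f"Simulated 5% critical value for k = {k}: {crit_95:.3f}")
```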
223.
In many engineering problems it is necessary to draw statistical inferences on the mean of a lognormal distribution based on a complete sample of observations; statistical demonstration of mean time to repair (MTTR) is one example. Although optimum confidence intervals and hypothesis tests for the lognormal mean have been developed, they are difficult to use, requiring extensive tables and/or a computer. This paper presents simplified conservative methods for calculating confidence intervals or hypothesis tests for the lognormal mean. Here, “conservative” refers to confidence intervals (hypothesis tests) whose infimum coverage probability (supremum probability of rejecting the null hypothesis, taken over parameter values under the null hypothesis) equals the nominal level. The term has obvious implications for confidence intervals: they are “wider” in some sense than their optimum or exact counterparts. Applying the term “conservative” to hypothesis tests should not be confusing if it is remembered that their equivalent confidence intervals are conservative. No claim of optimality is made for these conservative procedures. These are direct statistical inference methods for the lognormal mean, as opposed to the already well-known methods for the parameters of the underlying normal distribution. The method currently employed in MIL-STD-471A for statistical demonstration of MTTR is analyzed and compared with the new method in terms of asymptotic relative efficiency. The new methods are also compared with the optimum methods derived by Land (1971, 1973).
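Neither the paper's conservative procedures nor Land's exact method is reproduced here; as a point of reference only, the sketch below applies the widely used Cox approximation for a confidence interval on a lognormal mean, with hypothetical repair-time data.

```python
import numpy as np
from scipy.stats import norm

def lognormal_mean_ci_cox(x, conf=0.90):
    """Approximate CI for the mean of a lognormal sample x (Cox's method).

    Works on y = log(x): the log of the lognormal mean is mu + sigma^2 / 2.
    """
    y = np.log(np.asarray(x, dtype=float))
    n = len(y)
    ybar, s2 = y.mean(), y.var(ddof=1)
    point = ybar + s2 / 2.0
    se = np.sqrt(s2 / n + s2 ** 2 / (2.0 * (n - 1)))
    z = norm.ppf(0.5 + conf / 2.0)
    return np.exp(point - z * se), np.exp(point + z * se)

# Hypothetical repair times (hours) for an MTTR demonstration.
repair_times = [0.4, 0.7, 0.9, 1.2, 1.5, 2.1, 2.8, 3.6, 4.4, 6.0]
lo, hi = lognormal_mean_ci_cox(repair_times, conf=0.90)
print(f"Approximate 90% CI for MTTR: ({lo:.2f}, {hi:.2f}) hours")
```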
224.
An algorithm is presented for computing the finite population parameters and the approximate probability values associated with a recently developed class of statistical inference techniques termed multi-response randomized block permutation procedures (MRBP).
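The MRBP formulas and exact probability computations are not reproduced here; the sketch below is a generic Monte Carlo randomized block permutation test on multivariate responses, using a distance-based statistic and within-block relabelling, offered only to illustrate the general idea. The statistic and the data-generating setup are assumptions.

```python
import numpy as np

def block_perm_test(y, n_perm=5000, seed=0):
    """Distance-based permutation test for treatment agreement across blocks.

    y has shape (blocks, treatments, responses). The statistic is the average
    pairwise Euclidean distance between blocks within the same treatment;
    labels are permuted within each block to approximate the null distribution.
    """
    rng = np.random.default_rng(seed)
    b, g, _ = y.shape

    def stat(arr):
        total = 0.0
        for j in range(g):                      # within each treatment
            d = arr[:, j, :]
            for i in range(b):
                for k in range(i + 1, b):
                    total += np.linalg.norm(d[i] - d[k])
        return total / (g * b * (b - 1) / 2.0)

    observed = stat(y)
    count = 0
    for _ in range(n_perm):
        perm = np.stack([y[i, rng.permutation(g), :] for i in range(b)])
        if stat(perm) <= observed:              # small distances = strong agreement
            count += 1
    return observed, (count + 1) / (n_perm + 1)

# Hypothetical data: 6 blocks, 3 treatments, 2 response variables.
rng = np.random.default_rng(1)
effects = np.array([0.0, 0.5, 1.0])
data = rng.normal(size=(6, 3, 2)) + effects[None, :, None]
obs, p = block_perm_test(data, n_perm=2000)
print(f"Statistic = {obs:.3f}, approximate p-value = {p:.3f}")
```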
225.
To address the problems of traditional FPGA chip testing methods based on purely hardware platforms, a testing method based on hardware-software co-design is proposed and verified. The method brings in the flexibility and observability of software, and offers large storage depth, a large number of testable I/O pins, automatic configuration download without manual intervention, and automatic localization of faults inside the FPGA. It improves FPGA test speed and reliability while lowering test cost, giving a better cost-performance ratio than traditional automatic test equipment (ATE). Using the hardware-software co-design approach, the I/O blocks of a Xilinx 4010 device were tested, achieving automatic repeated configuration, testing, and fault localization of the FPGA chip.
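No real FPGA, test platform, or vendor API is used below; the sketch is a purely hypothetical host-side loop with a simulated device object, illustrating the configure / apply-vectors / compare / localize cycle described above.

```python
# Hypothetical host-side test loop; DeviceUnderTest is a stand-in simulation,
# not a real FPGA or vendor API.
class DeviceUnderTest:
    def __init__(self, faulty_pins=frozenset()):
        self.faulty_pins = set(faulty_pins)
        self.configured = False

    def download_configuration(self, bitstream_id):
        self.configured = True          # stands in for the automatic download step

    def apply_vector(self, vector):
        # A healthy loopback I/O block echoes the stimulus; a faulty pin sticks at 0.
        return {pin: (0 if pin in self.faulty_pins else bit)
                for pin, bit in vector.items()}

def test_io_blocks(dut, pins, patterns):
    """Repeatedly configure the device, drive each pattern, and localize bad pins."""
    failures = set()
    for pattern_id, bit in enumerate(patterns):
        dut.download_configuration(pattern_id)          # automatic re-configuration
        stimulus = {pin: bit for pin in pins}
        response = dut.apply_vector(stimulus)
        failures |= {pin for pin in pins if response[pin] != stimulus[pin]}
    return failures

pins = [f"IO_{i}" for i in range(16)]
dut = DeviceUnderTest(faulty_pins={"IO_3", "IO_11"})     # injected faults for the demo
print("Faulty pins located:", sorted(test_io_blocks(dut, pins, patterns=[0, 1])))
```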
226.
For longitudinal time series data, linear mixed models that contain both random effects across individuals and first-order autoregressive errors within individuals may be appropriate. This work develops statistical diagnostics for such models under a proposed elliptical error structure. The class of elliptical distributions offers a more flexible modelling framework, since it contains both light- and heavy-tailed distributions. Iterative procedures for the maximum-likelihood estimates of the model parameters are presented. Score tests are constructed for the presence of autocorrelation and for the homogeneity of autocorrelation coefficients among individuals, and the properties of the test statistics are investigated through Monte Carlo simulation. The local influence method for the models is also given. Analysis of a real data set illustrates the value of the models and the diagnostic statistics.
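The paper's estimation procedures and score tests are not reproduced here; the sketch below only illustrates the model structure by simulating subjects with a random intercept and within-subject errors drawn from a multivariate t distribution (a heavy-tailed elliptical distribution) with an AR(1) correlation matrix, followed by a naive residual autocorrelation check.

```python
import numpy as np

def simulate_subject(rng, n_times, beta, sigma_b, rho, nu, sigma_e):
    """One subject: random intercept + multivariate-t AR(1) errors (elliptical)."""
    t = np.arange(n_times, dtype=float)
    X = np.column_stack([np.ones(n_times), t])
    R = rho ** np.abs(np.subtract.outer(t, t))   # AR(1) correlation matrix
    z = rng.multivariate_normal(np.zeros(n_times), R)
    w = rng.chisquare(nu) / nu
    e = sigma_e * z / np.sqrt(w)                 # multivariate t_nu with correlation R
    b = rng.normal(0.0, sigma_b)                 # random intercept
    return X, X @ beta + b + e

rng = np.random.default_rng(42)
beta = np.array([1.0, 0.3])
subjects = [simulate_subject(rng, n_times=8, beta=beta,
                             sigma_b=0.7, rho=0.5, nu=5, sigma_e=1.0)
            for _ in range(50)]

# Naive check of within-subject residual autocorrelation (not the paper's score test).
lags = []
for X, y in subjects:
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    lags.append(np.corrcoef(resid[:-1], resid[1:])[0, 1])
print(f"Mean lag-1 residual autocorrelation: {np.mean(lags):.3f}")
```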
227.
The tabled significance values of the Kolmogorov-Smirnov goodness-of-fit statistic determined for continuous underlying distributions are conservative for applications involving discrete underlying distributions. Conover (1972) proposed an efficient method for computing the exact significance level of the Kolmogorov-Smirnov test for discrete distributions; however, he warned against its use for large sample sizes because “the calculations become too difficult.”

In this work we explore the relationship between sample size and the computational effectiveness of Conover's formulas, where “computational effectiveness” is taken to mean the accuracy attained with a fixed precision of machine arithmetic. The nature of the difficulties in the calculations is pointed out. It is indicated that, despite these difficulties, Conover's method of computing the Kolmogorov-Smirnov significance level for discrete distributions can still be a useful tool for a wide range of sample sizes.
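Conover's exact formulas are not reproduced here; the Monte Carlo sketch below only illustrates the conservativeness described above by estimating the null rejection rate of the Kolmogorov-Smirnov test at the asymptotic critical value from the continuous tables when sampling from a fully specified discrete uniform distribution.

```python
import numpy as np

def ks_stat_discrete(sample, support, cdf):
    """KS statistic D = max_x |F_n(x) - F(x)| for a discrete null distribution.

    Because both F_n and F are step functions jumping only at the support
    points, the supremum is attained at those points.
    """
    fn = np.array([np.mean(sample <= x) for x in support])
    return float(np.max(np.abs(fn - cdf)))

rng = np.random.default_rng(0)
support = np.arange(5)                      # discrete uniform on {0, ..., 4}
cdf = (support + 1) / 5.0
n, alpha, reps = 50, 0.05, 5000
crit_continuous = 1.358 / np.sqrt(n)        # asymptotic 5% value from continuous tables

rejections = 0
for _ in range(reps):
    x = rng.integers(0, 5, size=n)
    if ks_stat_discrete(x, support, cdf) > crit_continuous:
        rejections += 1
print(f"Null rejection rate: {rejections / reps:.4f} (nominal {alpha})")
```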
228.
Uncertainty and sensitivity analysis is an essential ingredient of model development and application. For many uncertainty and sensitivity analysis techniques, sensitivity indices are calculated from a relatively large sample to measure the importance of parameters through their contributions to uncertainty in model outputs. To compare their importance statistically, uncertainty and sensitivity analysis techniques must provide standard errors of the estimated sensitivity indices. In this paper, a delta method is used to analytically approximate standard errors of estimated sensitivity indices for a popular sensitivity analysis method, the Fourier amplitude sensitivity test (FAST). Standard errors estimated with the delta method were compared with those estimated from 20 sample replicates. We found that the delta method provides a good approximation to the standard errors of both first-order and higher-order sensitivity indices. Finally, based on the standard error approximation, we also propose a method to determine the minimum sample size needed to achieve a desired estimation precision for a specified sensitivity index. The standard error estimation method presented in this paper can make FAST analysis computationally much more efficient for complex models.
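The FAST-specific derivation is not reproduced here; since a sensitivity index is a ratio of estimated quantities, the sketch below illustrates the delta-method idea on a generic ratio estimator and compares its standard error with one obtained from independent replicates. The data-generating setup is an assumption for illustration.

```python
import numpy as np

def ratio_delta_se(a, b):
    """Delta-method SE of r = mean(a) / mean(b) from paired samples a, b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n = len(a)
    abar, bbar = a.mean(), b.mean()
    r = abar / bbar
    va, vb, cab = a.var(ddof=1), b.var(ddof=1), np.cov(a, b, ddof=1)[0, 1]
    var_r = (va - 2.0 * r * cab + r ** 2 * vb) / (n * bbar ** 2)
    return r, np.sqrt(var_r)

rng = np.random.default_rng(0)
n = 400
b = rng.gamma(shape=4.0, scale=1.0, size=n)       # plays the role of a total variance
a = 0.3 * b + rng.normal(0.0, 0.2, size=n)        # plays the role of a partial variance

r, se_delta = ratio_delta_se(a, b)

# Replicate-based SE for comparison (20 independent replicates, echoing the paper's check).
reps = []
for _ in range(20):
    bb = rng.gamma(4.0, 1.0, size=n)
    aa = 0.3 * bb + rng.normal(0.0, 0.2, size=n)
    reps.append(aa.mean() / bb.mean())
print(f"ratio = {r:.3f}, delta SE = {se_delta:.4f}, replicate SE = {np.std(reps, ddof=1):.4f}")
```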
229.
There are a number of situations in which the experimental data observed are record statistics. In this paper, optimal confidence intervals as well as uniformly most powerful (MP) tests for one-sided alternatives are developed. Since a uniformly MP test for a two-sided alternative does not exist, generalized likelihood ratio tests and uniformly unbiased and invariant tests are derived for the two parameters of the exponential distribution based on record data. For illustrative purposes, a data set on the times between consecutive telephone calls to a company's switchboard is analysed using the proposed procedures. Finally, some open problems in this direction are pointed out.
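The paper's two-parameter procedures are not reproduced here; for the simpler one-parameter exponential with mean θ, the spacings between successive upper records are again exponential by memorylessness, so the nth record R_n satisfies 2R_n/θ ~ χ²(2n) and yields the exact interval sketched below, with made-up record values.

```python
from scipy.stats import chi2

def exp_mean_ci_from_record(r_n, n, conf=0.95):
    """Exact CI for the exponential mean theta from the n-th upper record value r_n.

    Uses the pivotal quantity 2 * r_n / theta ~ chi-squared with 2n degrees of freedom.
    """
    alpha = 1.0 - conf
    lower = 2.0 * r_n / chi2.ppf(1.0 - alpha / 2.0, df=2 * n)
    upper = 2.0 * r_n / chi2.ppf(alpha / 2.0, df=2 * n)
    return lower, upper

# Hypothetical upper records (e.g., longest observed gaps between calls, in minutes).
records = [1.8, 4.1, 5.9, 9.3, 12.7]
n = len(records)
lo, hi = exp_mean_ci_from_record(records[-1], n, conf=0.95)
print(f"95% CI for the exponential mean based on R_{n} = {records[-1]}: ({lo:.2f}, {hi:.2f})")
```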
230.
Mixed models are powerful tools for the analysis of clustered data, and many extensions of the classical linear mixed model with normally distributed response have been established. As with all parametric (P) models, correctness of the assumed model is critical for the validity of the ensuing inference. An incorrectly specified P means model may be improved by using a local, or nonparametric (NP), model. Two local models are proposed through a pointwise weighting of the marginal and conditional variance–covariance matrices. However, NP models tend to fit irregularities in the data and may produce fits with high variance. Model robust regression techniques estimate the mean response as a convex combination of a P and an NP model fit to the data. This is a semiparametric method by which incomplete or incorrectly specified P models can be improved by adding an appropriate amount of the NP fit. We compare the approximate integrated mean square error of the P, NP, and mixed model robust methods via a simulation study, and apply these methods to two real data sets: monthly wind speed data from counties in Ireland and the engine speed data.
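The mixed-model version developed in the paper is not reproduced here; the sketch below only illustrates the convex-combination idea by blending an ordinary least squares line (P fit) with a Nadaraya-Watson kernel smoother (NP fit). The mixing weight, bandwidth, and data are assumptions chosen for illustration.

```python
import numpy as np

def ols_fit(x, y):
    """Parametric (P) fit: simple linear regression predictions at the data points."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return X @ beta

def nadaraya_watson(x, y, h):
    """Nonparametric (NP) fit: Gaussian-kernel smoother evaluated at the data points."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)

def model_robust_fit(x, y, lam, h=0.3):
    """Convex combination (1 - lam) * parametric + lam * nonparametric fit."""
    return (1.0 - lam) * ols_fit(x, y) + lam * nadaraya_watson(x, y, h)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 3, size=80))
y = 1.0 + 0.5 * x + 0.4 * np.sin(3 * x) + rng.normal(0, 0.2, size=80)  # curved truth

# lam = 0 reproduces the P fit, lam = 1 the NP fit; in practice lam is chosen
# from the data (the semiparametric step), here it is fixed for illustration.
fit = model_robust_fit(x, y, lam=0.5)
print(f"Residual SD of the blended fit: {np.std(y - fit):.3f}")
```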