991.
The local maximum likelihood estimate $\hat{\theta}_t$ of a parameter in a statistical model $f(x, \theta)$ is defined by maximizing a weighted version of the likelihood function which gives more weight to observations in the neighbourhood of $t$. The paper studies the sense in which $f(t, \hat{\theta}_t)$ is closer to the true distribution $g(t)$ than the usual estimate $f(t, \hat{\theta})$ is. Asymptotic results are presented for the case in which the model misspecification becomes vanishingly small as the sample size tends to $\infty$. In this setting, the relative entropy risk of the local method is better than that of maximum likelihood. The form of the optimum weights for the local likelihood is obtained and illustrated for the normal distribution.
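A minimal sketch of the idea for a normal model, assuming Gaussian kernel weights centred at $t$; the function name, bandwidth, and optimizer choice are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def local_mle(x, t, bandwidth=1.0):
    """Local MLE of (mu, sigma) for a normal model f(x; theta),
    giving more weight to observations near t via a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - t) / bandwidth) ** 2)  # kernel weights around t

    def neg_weighted_loglik(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)  # parameterize on the log scale to keep sigma > 0
        loglik = -0.5 * np.log(2 * np.pi) - log_sigma - 0.5 * ((x - mu) / sigma) ** 2
        return -np.sum(w * loglik)

    res = minimize(neg_weighted_loglik, x0=[x.mean(), np.log(x.std())])
    return res.x[0], np.exp(res.x[1])  # (mu_hat, sigma_hat) at t

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=500)
print(local_mle(x, t=0.5))
```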
992.
Sequential order statistics are an extension of ordinary order statistics. They model the successive failure times in sequential k-out-of-n systems, where the failure of a component may affect the residual lifetimes of the remaining ones. In this paper, we consider the residual lifetime of the components after the kth failure in the sequential (n − k + 1)-out-of-n system. We extend some results on the joint distribution of the residual lifetimes of the remaining components in an ordinary (n − k + 1)-out-of-n system, presented in Bairamov and Arnold (Stat Probab Lett 78(8):945–952, 2008), to the case of the sequential (n − k + 1)-out-of-n system.
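A simulation sketch of the sequential setting under an exponential baseline, using the standard representation in which, after the (j − 1)th failure, each survivor's hazard is scaled by a model parameter; the parameter values below are illustrative assumptions, not from the paper.

```python
import numpy as np

def sequential_failure_times(n, alphas, lam=1.0, rng=None):
    """Simulate the n successive failure times of a sequential system with
    exponential component lifetimes: after the (j-1)th failure, each of the
    n-j+1 survivors fails at rate alphas[j-1] * lam, so the j-th spacing
    is exponential with rate (n - j + 1) * alphas[j-1] * lam."""
    rng = rng or np.random.default_rng()
    times, t = [], 0.0
    for j in range(1, n + 1):
        rate = (n - j + 1) * alphas[j - 1] * lam
        t += rng.exponential(1.0 / rate)
        times.append(t)
    return np.array(times)

# Illustrative load-sharing: survivors fail faster after each failure.
alphas = [1.0, 1.2, 1.5, 2.0, 3.0]
sims = np.array([sequential_failure_times(5, alphas) for _ in range(10000)])
k = 3  # look at the time to the next failure after the k-th one
print("mean time from k-th to (k+1)-th failure:", (sims[:, k] - sims[:, k - 1]).mean())
```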
993.
Differences between plant varieties are established from phenotypic observations, which are both space and time consuming. Moreover, phenotypic data reflect the combined effects of genotype and environment. By contrast, molecular data are easier to obtain and give direct access to the genotype. In order to save experimental trials and to concentrate effort on the relevant comparisons between varieties, the relationship between phenotypic and genetic distances is studied. It appears that the classical genetic distances based on molecular data are not appropriate for predicting phenotypic distances. Within the linear model framework, we define a new pseudo genetic distance, which is a prediction of the phenotypic one. The distribution of the phenotypic distance given the pseudo genetic distance is established. Statistical properties of the predicted distance are derived when the parameters of the model are either given or estimated. We finally apply these results to distinguish among 144 maize lines. This case study is very satisfactory because the use of anonymous molecular markers (RFLP) saves 29% of the trials with an acceptable error risk. These results need to be confirmed on other varieties and species, and would certainly be improved by using genes coding for phenotypic traits.
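A hedged sketch of the core idea on synthetic data: regress phenotype on marker scores, and use distances between predicted phenotypes as the pseudo genetic distance. The data-generating step and all sizes are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_lines, n_markers = 120, 30
M = rng.integers(0, 2, size=(n_lines, n_markers)).astype(float)      # 0/1 marker scores
beta = rng.normal(0, 1, n_markers) * (rng.random(n_markers) < 0.2)   # sparse marker effects
y = M @ beta + rng.normal(0, 0.5, n_lines)                           # genotype + environment

# Fit the linear model phenotype ~ markers (least squares with intercept).
X = np.column_stack([np.ones(n_lines), M])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ coef

# Pseudo genetic distance between lines i, j: |difference of predicted phenotypes|.
idx = np.triu_indices(n_lines, k=1)
pseudo = np.abs(y_hat[:, None] - y_hat[None, :])[idx]
pheno = np.abs(y[:, None] - y[None, :])[idx]
print("correlation of pseudo vs phenotypic distances:",
      round(np.corrcoef(pseudo, pheno)[0, 1], 2))
```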
994.
Many companies are trying to clarify what their main objectives are and what their business should be doing. The Six Sigma approach concentrates on clarifying business strategy and making sure that everything relates to company objectives. It is vital to describe each part of the business in such a way that everyone can understand the causes of variation, since that understanding leads to improvements in processes and performance. This paper describes a situation where the full implementation of SPC methodology made possible a visual and widely appreciated summary of the performance of one important aspect of the business. The major part of the work was identifying the core objectives and deciding how to encapsulate each of them in one or more suitable measurements. The next step was to review the practicalities of obtaining the measurements, together with their reliability and representativeness. Finally, the measurements were presented in chart form and the more traditional steps of SPC analysis were begun. Data from fast-changing business environments are prone to many problems, such as short data histories, unusual distributions and other uncertainties. Issues surrounding these, and the eventual extraction of a meaningful set of information, are discussed in the paper. The measurement framework has proved very useful and, from an initial circulation to a handful of people, it now forms an important part of an information process that provides responsible managers with valuable control information. The framework is kept fresh and vital by constant review and modification. Improved electronic data collection and dissemination of the report have also proved very important.
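A minimal sketch of the kind of SPC chart such a framework produces: a Shewhart individuals (I-MR) chart with the standard limits at mean ± 2.66 × average moving range. The business metric and numbers are invented for illustration; the paper does not specify which charts were used.

```python
import numpy as np

def individuals_chart_limits(x):
    """Shewhart individuals chart: centre line at the mean, control limits
    at mean +/- 2.66 * average moving range (2.66 = 3/d2, d2 = 1.128, n = 2)."""
    x = np.asarray(x, dtype=float)
    mr = np.abs(np.diff(x))        # moving ranges of consecutive observations
    centre = x.mean()
    half_width = 2.66 * mr.mean()
    return centre, centre - half_width, centre + half_width

rng = np.random.default_rng(2)
weekly_metric = rng.normal(100, 5, size=30)   # e.g. a weekly on-time delivery score
weekly_metric[-1] = 125                       # one out-of-control week
centre, lcl, ucl = individuals_chart_limits(weekly_metric[:-1])
print(f"centre={centre:.1f}, LCL={lcl:.1f}, UCL={ucl:.1f}")
print("last point signals:", not (lcl <= weekly_metric[-1] <= ucl))
```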
995.
In biological experiments, multiple comparison test procedures may declare a difference in means statistically significant even though the difference is not worthy of attention given the inherent variation in the characteristic. This can happen when the magnitude of the change in the characteristic after treatment is small, less than the natural biological variation. It then becomes the statistician's job to design a test that removes this paradox, so that statistical significance coincides with biological significance. The present paper develops a multiple comparison test for comparing two treatments with a control by incorporating within-person variation into interval hypotheses. Assuming a common (unknown) variance for the three groups (control and two treatments), and taking the width of the interval to be the (known) intra-individual variation, the distribution of the test statistic is obtained as a bivariate non-central t. A level α test procedure is designed, and a table of critical values for carrying out the test is constructed for α = 0.05. Exact powers are computed for various small sample sizes and parameter values, and the test is powerful for all values of the parameters. The test was used to detect differences in zinc absorption for two cereal diets compared with a control diet; it concluded that the diets were homogeneous with the control diet, whereas Dunnett's procedure, applied to the same data, concluded otherwise. The new test can also be applied to other data situations in biology, medicine and agriculture.
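A hedged sketch of the interval-hypothesis idea on synthetic data. The paper's exact procedure uses bivariate non-central t critical values for the two comparisons jointly; the sketch below applies a standard two-one-sided-tests (TOST) check per comparison as a simpler stand-in, with an assumed biological margin delta.

```python
import numpy as np
from scipy import stats

def tost_vs_control(treat, control, delta, alpha=0.05):
    """Two one-sided t tests of H0: |mean(treat) - mean(control)| >= delta,
    with a pooled variance estimate; True means 'biologically equivalent'."""
    n1, n2 = len(treat), len(control)
    sp2 = ((n1 - 1) * np.var(treat, ddof=1)
           + (n2 - 1) * np.var(control, ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    d = np.mean(treat) - np.mean(control)
    df = n1 + n2 - 2
    p = max(1 - stats.t.cdf((d + delta) / se, df),   # H0: d <= -delta
            stats.t.cdf((d - delta) / se, df))       # H0: d >= +delta
    return p < alpha

rng = np.random.default_rng(3)
control = rng.normal(30, 4, 15)   # e.g. zinc absorption on the control diet
for name, diet in [("A", rng.normal(31, 4, 15)), ("B", rng.normal(29, 4, 15))]:
    print("diet", name, "equivalent to control:",
          tost_vs_control(diet, control, delta=5.0))
```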
996.
Dilated cardiomyopathy is a disease of unknown cause characterized by dilation and impaired function of one or both ventricles. Most cases are believed to be sporadic, although familial forms have been detected; the familial form has been estimated to have a relative frequency of about 25%. Since, apart from family history, the familial form has no other characteristics that could help in distinguishing the two forms of the disease, an estimate of the frequency of the familial form should take into account possible misclassification error. In our study, 100 cases were randomly selected from a prospective series of 350 patients. Out of them, 28 index cases were included in the analysis: 12 were known to be familial, and 88 were believed to be sporadic. After extensive clinical examination of the relatives, 3 patients believed to have a sporadic form were found to have a familial form, and 13 cases had a confirmed sporadic disease. Models in the log-linear product (LLP) class were used to separate classification errors from underlying patterns of disease incidence. The most conservative crude estimate of the misclassification error is 16.1% (CI 0.22–23.27%), which leads to a crude estimate of the frequency of the familial form of about 60%. An estimate of the disease frequency, adjusted for the sampling plan, is 40.93% (CI 32.29–44.17%). The results are consistent with the hypothesis that genetic factors, although they represent a major cause of the disease, are still underestimated.
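The paper fits log-linear product (LLP) models; as a much simpler illustration of the same correction idea, here is a Rogan–Gladen-style adjustment of an observed proportion for known misclassification rates. This is a swapped-in textbook technique, not the paper's method, and every number below is illustrative.

```python
def adjusted_prevalence(p_obs, sensitivity, specificity):
    """Rogan-Gladen estimator: corrects an observed proportion for
    misclassification, given the classifier's sensitivity and specificity."""
    return (p_obs + specificity - 1) / (sensitivity + specificity - 1)

# Illustrative numbers only: 25% of cases labelled familial at first screening,
# with 16% of truly familial cases initially misclassified as sporadic and
# sporadic cases assumed never to be labelled familial.
print(f"adjusted familial frequency: "
      f"{adjusted_prevalence(p_obs=0.25, sensitivity=0.84, specificity=1.00):.2f}")
```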
997.
A variety of methods for eliciting a prior distribution for a multivariate normal (MVN) distribution have recently been proposed. This paper reports an experiment in which 16 meteorologists used the methods to quantify their opinions about climatology variables. Our results compare prior models and show, in particular, that it can be better to assume that the mean and variance of an MVN distribution are independent a priori, rather than to model opinion by the conjugate prior distribution. Using a proper scoring rule, different forms of assessment task are examined and alternative ways of estimating parameters are compared. To quantify opinion about means, it proved preferable to ask directly about the means rather than about individual observations, while, to quantify opinion about the variance matrix, it was best to ask about deviations from the mean. Further results include recommendations for how the parameters of the prior distribution should be estimated.
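A hedged sketch of how such prior models can be compared with a proper scoring rule, reduced to the univariate normal case for brevity: score the prior predictive density of realized data under an independence prior versus a conjugate prior (where the prior spread of the mean is tied to σ). All hyperparameter values are assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
y = rng.normal(12.0, 3.0, size=50)  # e.g. realized values of a climate variable

def log_score(mu_draws, sigma_draws, y):
    """Monte Carlo logarithmic score of the prior predictive density at the
    data; a higher (less negative) score means the prior fits better."""
    dens = stats.norm.pdf(y[:, None], mu_draws[None, :], sigma_draws[None, :])
    return np.log(dens.mean(axis=1)).sum()

n = 20000
# Model 1: mean and variance independent a priori.
mu_ind = rng.normal(10.0, 4.0, n)
sig_ind = np.sqrt(stats.invgamma.rvs(3.0, scale=20.0, size=n, random_state=rng))
# Model 2: conjugate prior -- prior variance of the mean proportional to sigma^2.
sig_con = np.sqrt(stats.invgamma.rvs(3.0, scale=20.0, size=n, random_state=rng))
mu_con = rng.normal(10.0, sig_con / np.sqrt(2.0))  # kappa0 = 2 pseudo-observations

print("independent prior log score:", round(log_score(mu_ind, sig_ind, y), 1))
print("conjugate   prior log score:", round(log_score(mu_con, sig_con, y), 1))
```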
998.
One of the main advantages of factorial experiments is the information they can offer on interactions. When there are many factors to be studied, some or all of this information is often sacrificed to keep the size of the experiment economically feasible. Two strategies for group screening of a large number of factors over two stages of experimentation are presented, with particular emphasis on the detection of interactions. One approach estimates only main effects at the first stage (classical group screening), whereas the other, new method (interaction group screening) estimates both main effects and key two-factor interactions at the first stage. Three criteria are used to guide the choice of screening technique and the size of the groups of factors for study in the first-stage experiment. The criteria seek to minimize the expected total number of observations in the experiment, the probability that the size of the experiment exceeds a prespecified target, and the proportion of active individual factorial effects that are not detected. To implement these criteria, results are derived on the relationship between the grouped and individual factorial effects, and on the probability distributions of the numbers of grouped factors whose main effects or interactions are declared active at the first stage. Examples illustrate the methodology, and some issues and open questions for the practical implementation of the results are discussed.
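A toy simulation of the classical (main-effects-only) two-stage strategy, assuming a grouped factor's effect is the sum of its members' effects and a simple detection threshold; group sizes, effect model, and run counting are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n_factors, group_size, p_active = 24, 4, 0.1
effects = rng.normal(0, 1, n_factors) * (rng.random(n_factors) < p_active)

# Stage 1 (classical group screening): factors in a group share one level, so a
# grouped main effect is the sum of its members' effects. Note that effects of
# opposite sign within a group can cancel and hide active factors.
groups = np.arange(n_factors).reshape(-1, group_size)
group_effects = effects[groups].sum(axis=1)
active_groups = np.abs(group_effects) > 0.5   # illustrative detection threshold

# Stage 2: only factors in the groups declared active are studied individually.
stage2_factors = groups[active_groups].ravel()
runs = (len(groups) + 1) + (len(stage2_factors) + 1)  # crude run counts per stage
print("factors carried to stage 2:", stage2_factors)
print("total runs ~", runs, "vs", n_factors + 1, "for a single-stage study")
```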
999.
On the distribution of the sum of independent uniform random variables
Motivated by an application in change-point analysis, we derive a closed form for the density function of the sum of n independent, non-identically distributed, uniform random variables.
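For reference, the classical closed form in the iid special case (the Irwin–Hall density) and its standard inclusion-exclusion generalization to components $X_i \sim U(0, a_i)$; the paper itself should be consulted for its own, more general statement.

```latex
% Irwin--Hall: X_i iid U(0,1), S_n = X_1 + \dots + X_n
f_{S_n}(x) = \frac{1}{(n-1)!} \sum_{k=0}^{\lfloor x \rfloor}
  (-1)^k \binom{n}{k} (x-k)^{n-1}, \qquad 0 \le x \le n.

% Inclusion--exclusion form for independent X_i \sim U(0, a_i):
f_{S_n}(x) = \frac{1}{(n-1)! \, \prod_{i=1}^n a_i}
  \sum_{S \subseteq \{1,\dots,n\}} (-1)^{|S|}
  \Bigl( x - \sum_{i \in S} a_i \Bigr)_+^{\,n-1},
  \qquad (u)_+ = \max(u, 0).
```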
1000.
In this paper, we consider noninformative priors for the ratio of variances in two normal populations and develop first- and second-order matching priors. We find that the second-order matching prior matches alternative coverage probabilities up to the second order and is also an HPD matching prior. It turns out that, among the reference priors, only the one-at-a-time reference prior satisfies the second-order matching criterion. Our simulation study indicates that the one-at-a-time reference prior performs better than the other reference priors in terms of matching the target coverage probabilities in a frequentist sense. This work was supported by Korea Research Foundation Grant (KRF-2004-002-C00041).
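A minimal sketch of the kind of frequentist coverage check such a simulation study performs, assuming the independence Jeffreys-type prior $p(\mu_i, \sigma_i^2) \propto 1/\sigma_i^2$ in each population (under which the posterior interval for the variance ratio reduces to the classical F interval); this is an illustrative stand-in, not the paper's exact one-at-a-time reference prior computation.

```python
import numpy as np
from scipy import stats

def ratio_interval(x1, x2, level=0.95):
    """Posterior (= classical F) interval for sigma1^2 / sigma2^2 under
    independent priors proportional to 1/sigma_i^2 in each population."""
    n1, n2 = len(x1), len(x2)
    r = np.var(x1, ddof=1) / np.var(x2, ddof=1)
    a = (1 - level) / 2
    return (r / stats.f.ppf(1 - a, n1 - 1, n2 - 1),
            r / stats.f.ppf(a, n1 - 1, n2 - 1))

rng = np.random.default_rng(6)
true_ratio, hits, reps = 2.0, 0, 5000
for _ in range(reps):
    x1 = rng.normal(0, np.sqrt(2.0), 10)
    x2 = rng.normal(0, 1.0, 10)
    lo, hi = ratio_interval(x1, x2)
    hits += lo <= true_ratio <= hi
print("empirical coverage:", hits / reps)  # should be close to the nominal 0.95
```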