Search results: 3,460 articles matched (search time: 15 ms)
71.
In this paper we propose a Bayesian approach for making inferences from accelerated life tests that does not require distributional assumptions.
72.
On a multiple-choice test in which each item has r alternative options, a given number c of which are correct, various scoring models have been proposed. In one case the test-taker may choose a solution subset of any size and is graded both on how small the subset is and on how many correct answers it contains. In a second case the test-taker may select only solution subsets up to a prespecified maximum size and is graded as above. The first case is analogous to the situation where the test-taker is given a set of r options with each question; each question calls for selecting the subset of the r responses believed to be correct. In the second case, when the prespecified solution subset is restricted to size at most one, the resulting scoring model corresponds to the usual model, referred to below as standard. The number c of correct options per item is usually known to the test-taker in this case.

Scoring models are evaluated according to how well they correctly identify the total scores of the individuals in the class of test-takers. Loss functions are constructed which penalize scoring models that yield student scores not associated with the student's true (or average) total score on the exam. Scoring models are compared on the basis of cross-validated assessments of the loss incurred by using each of the given models. It is shown that in many cases the assessed loss for scoring models that allow students to choose more than one option per question is smaller than that for the standard scoring model.
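The abstract does not spell out the scoring formulas, so the following is an illustrative sketch only: a hypothetical subset-scoring rule (hits rewarded, wrong picks penalized so that marking all r options scores zero) alongside the standard one-choice rule it is compared against.

```python
def subset_item_score(chosen, correct, r):
    """Hypothetical subset-scoring rule for an r-option item whose set of
    correct options is `correct`.  Hits add credit, wrong picks subtract it,
    scaled so a fully correct minimal subset earns 1 and choosing all r
    options earns 0.  (Illustrative; not the paper's exact rule.)"""
    c = len(correct)
    hits = len(chosen & correct)
    misses = len(chosen - correct)
    return (hits - misses * c / (r - c)) / c

def standard_item_score(chosen, correct):
    """Standard scoring: exactly one option may be chosen; 1 if it is correct."""
    return 1.0 if len(chosen) == 1 and chosen <= correct else 0.0
```

With r = 4 and c = 2, choosing both correct options scores 1, while hedging by choosing all four options scores 0.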
73.
The failure rate r(t) is assumed to have the shape of the "first" part of the "bathtub" model, i.e. r(t) is non-increasing for t < τ and constant for t ≥ τ. The asymptotic distribution of one of the previously proposed estimates is investigated in this paper. This leads to a test of the hypothesis H0: τ ≤ τ0 vs H1: τ > τ0 (where τ0 > 0). An asymptotic expression for the power of this test under Pitman alternatives is derived. Some simulations are reported.
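Lifetimes from this model class (failure rate decreasing up to a change point τ, constant afterwards) can be simulated by inverting the cumulative hazard. The linearly decreasing form r(t) = a + b·(τ − t) for t < τ is an illustrative choice, not the paper's estimator or model:

```python
import math
import random

def sample_lifetime(a, b, tau, rng):
    """Inverse-transform sample from the hazard r(t) = a + b*(tau - t) for
    t < tau and r(t) = a for t >= tau (non-increasing, then constant).
    The cumulative hazard is H(t) = a*t + b*tau*t - b*t**2/2 on [0, tau)
    and H(t) = H(tau) + a*(t - tau) afterwards; we solve H(T) = u for an
    Exp(1) draw u."""
    u = -math.log(1.0 - rng.random())        # Exp(1) draw = target cumulative hazard
    H_tau = a * tau + b * tau * tau / 2.0    # cumulative hazard at the change point
    if u <= H_tau:
        s = a + b * tau                      # quadratic branch before tau
        return (s - math.sqrt(s * s - 2.0 * b * u)) / b
    return tau + (u - H_tau) / a             # constant-hazard branch after tau
```

At u = H(τ) both branches return exactly τ, so the sampler is continuous across the change point.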
74.
The own-children method (OCM) applied to the Italian Labour Force Survey (ILFS) provides an alternative source of fertility information for the years preceding the survey. By deriving information on children and on the population at risk from parents' characteristics, a large-scale dataset for fertility analysis in Italy becomes available, which can also be used to reconstruct event histories. The quality assessment, carried out by comparing the total fertility rate (TFR) calculated on the ILFS with the official regional and national TFRs from ISTAT, yields usable results.  
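The final step of such an analysis, computing the TFR from age-specific counts, can be sketched minimally. Real own-children work reverse-survives children and mothers before counting; that part is omitted, and the numbers below are purely illustrative.

```python
def total_fertility_rate(births, women):
    """TFR = sum of single-year age-specific fertility rates (births to
    mothers aged a, divided by women aged a exposed in the index year)."""
    return sum(births[a] / women[a] for a in births if women.get(a))

# Illustrative inputs: births[a] and women[a] indexed by mother's age a.
births = {25: 120, 30: 150, 35: 60}
women = {25: 1000, 30: 1000, 35: 1000}
tfr = total_fertility_rate(births, women)
```

Summing over all reproductive single-year ages (here just three, for brevity) gives the TFR that would be compared against the official ISTAT figures.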
75.
A Bayesian discovery procedure
Summary.  We discuss a Bayesian discovery procedure for multiple-comparison problems. We show that, under a coherent decision theoretic framework, a loss function combining true positive and false positive counts leads to a decision rule that is based on a threshold of the posterior probability of the alternative. Under a semiparametric model for the data, we show that the Bayes rule can be approximated by the optimal discovery procedure, which was recently introduced by Storey. Improving the approximation leads us to a Bayesian discovery procedure, which exploits the multiple shrinkage in clusters that are implied by the assumed non-parametric model. We compare the Bayesian discovery procedure and the optimal discovery procedure estimates in a simple simulation study and in an assessment of differential gene expression based on microarray data from tumour samples. We extend the setting of the optimal discovery procedure by discussing modifications of the loss function that lead to different single-thresholding statistics. Finally, we provide an application of the previous arguments to dependent (spatial) data.
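The thresholding rule the summary describes follows from a one-line expected-loss calculation. With loss λ·FP − TP, rejecting hypothesis i costs λ(1 − v_i) − v_i in expectation, where v_i is the posterior probability of the alternative, so it pays to reject exactly when v_i > λ/(1 + λ). A minimal sketch (the posterior probabilities themselves would come from the fitted model, which is not shown):

```python
def bayes_rejections(post_alt, lam):
    """Reject H_i when the expected loss of rejecting, lam*(1 - v_i) - v_i,
    is negative, i.e. when the posterior probability v_i of the alternative
    exceeds the threshold lam / (1 + lam)."""
    threshold = lam / (1.0 + lam)
    return [i for i, v in enumerate(post_alt) if v > threshold]
```

For λ = 1 (false positives and missed true positives weighted equally) the threshold is 1/2; larger λ makes the rule more conservative.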
76.
Abstract.  A new multiple testing procedure, the generalized augmentation procedure (GAUGE), is introduced. The procedure is shown to control the false discovery exceedance and to be competitive in terms of power. It is also shown how to apply the idea of GAUGE to achieve control of other error measures. Extensions to dependence are discussed, together with a modification valid under arbitrary dependence. We present an application to an original study on prostate cancer and to a benchmark data set on colon cancer.
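The abstract does not specify GAUGE itself, so the sketch below shows only the generic augmentation idea behind such procedures: start from a rejection set that controls the familywise error rate at level α, then add the next k most significant hypotheses, with k capped so the added fraction never exceeds the exceedance bound c. Since every false discovery beyond the initial set is among the k added hypotheses, P(FDP > c) ≤ α. The function name and interface are hypothetical.

```python
def augmented_rejections(sorted_pvalue_indices, n_fwer_rejections, c):
    """Augmentation sketch for false discovery exceedance control.
    `sorted_pvalue_indices` lists hypotheses by increasing p-value;
    the first `n_fwer_rejections` of them form a FWER-controlling set.
    We additionally reject the next k hypotheses, where k is the largest
    integer with k / (r0 + k) <= c, i.e. k = floor(c * r0 / (1 - c))."""
    r0 = n_fwer_rejections
    k = int(c * r0 / (1.0 - c))
    return sorted_pvalue_indices[: r0 + k]
```

With r0 = 4 initial rejections and c = 0.2, one extra hypothesis is rejected, since 1/5 = 0.2 but 2/6 > 0.2.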
77.
In analogy with the cumulative residual entropy recently proposed by Wang et al. [2003a. A new and robust information theoretic measure and its application to image alignment. In: Information Processing in Medical Imaging. Lecture Notes in Computer Science, vol. 2732, Springer, Heidelberg, pp. 388–400; 2003b. Cumulative residual entropy, a new measure of information and its application to image alignment. In: Proceedings of the Ninth IEEE International Conference on Computer Vision (ICCV’03), vol. 1, IEEE Computer Society Press, Silver Spring, MD, pp. 548–553], we introduce and study the cumulative entropy, a new measure of information alternative to the classical differential entropy. We show that the cumulative entropy of a random lifetime X can be expressed as the expectation of its mean inactivity time evaluated at X. Hence, our measure is particularly suitable for describing information in reliability-theory problems related to ageing that are based on past lifetimes and inactivity times. Our results include various bounds on the cumulative entropy, its connection to the proportional reversed hazards model, and the study of its dynamic version, which is shown to be increasing if the mean inactivity time is increasing. The empirical cumulative entropy is finally proposed to estimate the new information measure.
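The empirical cumulative entropy mentioned at the end is the plug-in estimate of CE(X) = −∫ F(x) log F(x) dx with the empirical distribution function in place of F. Between consecutive order statistics the empirical CDF is the step i/n, which gives the closed form sketched below (illustrative implementation):

```python
import math

def empirical_cumulative_entropy(sample):
    """Plug-in estimate of CE(X) = -integral F(x) log F(x) dx: with order
    statistics x_(1) <= ... <= x_(n), the empirical CDF equals i/n on
    [x_(i), x_(i+1)), so
        CE_n = -sum_{i=1}^{n-1} (x_(i+1) - x_(i)) * (i/n) * log(i/n)."""
    xs = sorted(sample)
    n = len(xs)
    return -sum((xs[i] - xs[i - 1]) * (i / n) * math.log(i / n)
                for i in range(1, n))
```

For the two-point sample {0, 1} the estimate is (1/2)·log 2, and it is non-negative for any sample since each term has −(i/n) log(i/n) ≥ 0.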
78.
Traditional multiple hypothesis testing procedures fix an error rate and determine the corresponding rejection region. In 2002 Storey proposed a fixed rejection region procedure and showed numerically that it can gain more power than the fixed error rate procedure of Benjamini and Hochberg while controlling the same false discovery rate (FDR). In this paper it is proved that when the number of alternatives is small compared to the total number of hypotheses, Storey's method can be less powerful than that of Benjamini and Hochberg. Moreover, the two procedures are compared by setting them to produce the same FDR. The difference in power between Storey's procedure and that of Benjamini and Hochberg is near zero when the distance between the null and alternative distributions is large, but Benjamini and Hochberg's procedure becomes more powerful as the distance decreases. It is shown that modifying the Benjamini and Hochberg procedure to incorporate an estimate of the proportion of true null hypotheses as proposed by Black gives a procedure with superior power.
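The Benjamini–Hochberg step-up procedure that both comparisons build on is short enough to state in full: reject the k smallest p-values, where k is the largest rank i with p_(i) ≤ i·q/m. A minimal implementation:

```python
def benjamini_hochberg(pvals, q):
    """Benjamini-Hochberg step-up procedure: sort the m p-values, find the
    largest rank i with p_(i) <= i * q / m, and reject the i smallest
    p-values.  Controls the FDR at level q under independence.  Returns the
    indices of the rejected hypotheses."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):     # step-up: keep the largest
        if pvals[i] <= rank * q / m:              # rank whose p-value clears
            k = rank                              # its threshold
    return sorted(order[:k])
```

The step-up character matters: in the second test case below, p_(3) = 0.039 fails its threshold 0.0375, yet all four hypotheses are rejected because p_(4) = 0.041 ≤ 0.05. Storey's variant replaces q/m by q/(π̂₀·m) using an estimate π̂₀ of the proportion of true nulls, which is the modification the abstract attributes to Black.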
79.
For many diseases, logistic constraints render large incidence studies difficult to carry out. This becomes a drawback, particularly when a new study is needed each time the incidence rate is investigated in a new population. By carrying out a prevalent cohort study with follow-up it is possible to estimate the incidence rate if it is constant. The authors derive the maximum likelihood estimator (MLE) of the overall incidence rate, λ, as well as age-specific incidence rates, by exploiting the epidemiologic relationship (prevalence odds) = (incidence rate) × (mean duration), i.e. P/[1 − P] = λ × µ. The authors establish the asymptotic distributions of the MLEs and provide approximate confidence intervals for the parameters. Moreover, the MLE of λ is asymptotically most efficient and is the natural estimator obtained by substituting the marginal maximum likelihood estimators for P and µ into P/[1 − P] = λ × µ. Following up the subjects allows the authors to develop these widely applicable procedures. The authors apply their methods to data collected as part of the Canadian Study of Health and Ageing to estimate the incidence rate of dementia amongst elderly Canadians. The Canadian Journal of Statistics © 2009 Statistical Society of Canada
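The plug-in estimator described above is a one-line rearrangement of P/[1 − P] = λ × µ. The numbers in the example are illustrative, not from the Canadian Study of Health and Ageing:

```python
def incidence_rate(prevalence, mean_duration):
    """Estimate lambda from the relation P / (1 - P) = lambda * mu by
    substituting estimates of the prevalence P and mean duration mu."""
    return prevalence / ((1.0 - prevalence) * mean_duration)

# Illustrative numbers: 8% prevalence, mean disease duration 5 years.
lam = incidence_rate(0.08, 5.0)   # about 0.017 cases per person-year
```

Note the prevalence odds P/(1 − P), not the prevalence itself, enter the formula; for rare diseases the two are nearly equal.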
80.
For given real functions g and h, we first give necessary and sufficient conditions for the existence of a random variable X satisfying E(g(X) | X ≥ y) = h(y) r_X(y), ∀ y ∈ C_X, where C_X and r_X are the support and the failure rate function of X, respectively. These extend the results of Ruiz and Navarro (1994) and Ghitany et al. (1995). Next we investigate necessary and sufficient conditions such that h(y) = E(g(X) | X ≥ y) for a given function h. Support for this research was provided in part by the National Science Council of the Republic of China, Grants No. NSC 86-2115-M-110-014 and NSC 88-2118-M-110-001.
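The conditional expectation in the characterization is easy to check numerically in a special case. For X ~ Exp(rate) and the illustrative choice g(x) = x, memorylessness gives E(X | X ≥ y) = y + 1/rate, while r_X(y) = rate, so here h(y) = (y + 1/rate)/rate. A quick Monte Carlo check of the conditional mean:

```python
import math
import random

def cond_mean_exp(y, rate, n=200000, seed=1):
    """Monte Carlo estimate of E[X | X >= y] for X ~ Exp(rate); by
    memorylessness this should be close to y + 1/rate."""
    rng = random.Random(seed)
    draws = (-math.log(1.0 - rng.random()) / rate for _ in range(n))
    tail = [x for x in draws if x >= y]
    return sum(tail) / len(tail)
```

With rate = 2 and y = 0.5 the estimate should land near 0.5 + 0.5 = 1.0, matching the identity with h(y) = (y + 1/rate)/rate and r_X(y) = rate.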