8,854 results found; showing items 91–100.
91.
Finite mixture models with concomitant information: assessing diagnostic criteria for diabetes (cited 1 time; 0 self-citations, 1 by others)
T. J. Thompson, P. J. Smith & J. P. Boyle, Journal of the Royal Statistical Society, Series C (Applied Statistics), 1998, 46(3): 393–404
The World Health Organization (WHO) diagnostic criteria for diabetes mellitus were determined in part by evidence that in some populations the plasma glucose level 2 h after an oral glucose load is a mixture of two distinct distributions. We present a finite mixture model that allows the two component densities to be generalized linear models and the mixture probability to be a logistic regression model. The model allows us to estimate the prevalence of diabetes and the sensitivity and specificity of the diagnostic criteria as a function of covariates, and to estimate them in the absence of an external standard. Sensitivity is the probability that a test indicates disease conditionally on disease being present. Specificity is the probability that a test indicates no disease conditionally on no disease being present. We obtained maximum likelihood estimates via the EM algorithm and derived the standard errors from the information matrix and by the bootstrap. In the application to data from the diabetes in Egypt project, a two-component mixture model fits well and the two components are interpreted as normal and diabetic. The means and variances are similar to results found in other populations. The minimum misclassification cutpoints decrease with age, are lower in urban areas and are higher in rural areas than the 200 mg dl⁻¹ cutpoint recommended by the WHO. These differences are modest and our results generally support the WHO criterion. Our methods allow the direct inclusion of concomitant data, whereas past analyses were based on partitioning the data.
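The core fitting step is easy to sketch. The following is a minimal illustration only, not the authors' covariate-dependent model: a plain two-component normal mixture fitted by EM to simulated glucose-like data (all numbers below are assumed for illustration), followed by a grid search for the minimum-misclassification cutpoint where the weighted component densities cross.

```python
import numpy as np
from scipy.stats import norm

def em_two_normals(x, iters=300):
    """EM for a two-component normal mixture (no covariates)."""
    # initialize the means from the tails of the sample
    mu1, mu2 = np.quantile(x, 0.1), np.quantile(x, 0.9)
    s1 = s2 = x.std()
    w = 0.5  # mixing weight of component 2 ("diabetic")
    for _ in range(iters):
        # E-step: posterior probability that each point came from component 2
        d2 = w * norm.pdf(x, mu2, s2)
        d1 = (1.0 - w) * norm.pdf(x, mu1, s1)
        r = d2 / (d1 + d2)
        # M-step: weighted moment updates
        w = r.mean()
        mu1 = np.average(x, weights=1 - r)
        s1 = np.sqrt(np.average((x - mu1) ** 2, weights=1 - r))
        mu2 = np.average(x, weights=r)
        s2 = np.sqrt(np.average((x - mu2) ** 2, weights=r))
    return w, (mu1, s1), (mu2, s2)

# simulated 2-h plasma glucose values (mg/dl): 80% "normal", 20% "diabetic"
rng = np.random.default_rng(0)
glucose = np.concatenate([rng.normal(100, 20, 800), rng.normal(230, 50, 200)])
w, (m1, sd1), (m2, sd2) = em_two_normals(glucose)

# minimum-misclassification cutpoint: where the weighted densities cross
grid = np.linspace(m1, m2, 2000)
cut = grid[np.argmin(np.abs((1 - w) * norm.pdf(grid, m1, sd1)
                            - w * norm.pdf(grid, m2, sd2)))]
```

The paper's model replaces the constant means and mixing weight with generalized linear and logistic regressions in covariates; the E-step / M-step structure is unchanged.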
92.
Approximation of a density by another density is considered in the case of different dimensionalities of the distributions. The results have been derived by inverting expansions of characteristic functions with the help of matrix techniques. The approximations obtained are all functions of cumulant differences and derivatives of the approximating density. The multivariate Edgeworth expansion follows from the results as a special case. Furthermore, the density functions of the trace and eigenvalues of the sample covariance matrix are approximated by the multivariate normal density, and a numerical example is given.
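For reference, the univariate Edgeworth expansion that the multivariate result specializes to can be written (standard textbook form, not taken from the paper; the variable is standardized with cumulants κ₃, κ₄):

```latex
f(x) \;\approx\; \varphi(x)\left[\,1 + \frac{\kappa_3}{6}\,He_3(x)
  + \frac{\kappa_4}{24}\,He_4(x) + \frac{\kappa_3^2}{72}\,He_6(x)\right],
```

where φ is the standard normal density and He_k are the probabilists' Hermite polynomials. Since He_k(x)φ(x) = (−1)^k φ^{(k)}(x), the correction terms are exactly the "cumulant differences times derivatives of the approximating density" mentioned in the abstract.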
93.
A K-sample testing problem is studied for multivariate counting processes with time-dependent frailty. Asymptotic distributions and efficiency of a class of non-parametric test statistics are established for certain local alternatives. The concept of efficiency is to show that for every non-parametric test in this class, there is a parametric submodel for which the optimal test has the same asymptotic power as the non-parametric one. The theory is applied to analyse a diabetic retinopathy study data set. A simulation study is also presented to illustrate the theory.
94.
Let X1, X2, ... be real-valued random variables forming a strictly stationary sequence, satisfying the basic requirement of being either pairwise positively quadrant dependent or pairwise negatively quadrant dependent. Let F be the marginal distribution function of the Xi's, which is estimated from the segment X1, ..., Xn both by the empirical distribution function Fn and by a smooth kernel-type estimate F̂n. These estimates are compared on the basis of their mean squared errors (MSE). The main results of this paper are the following. Under certain regularity conditions, the optimal bandwidth (in the MSE sense) is determined, and is found to be the same as that in the independent identically distributed case. It is also shown that n·MSE(Fn(t)) and n·MSE(F̂n(t)) tend to the same constant as n → ∞, so that one cannot discriminate between the two estimates on the basis of the MSE. Next, if i(n) = min{k ∈ {1, 2, ...}: MSE(Fk(t)) ≤ MSE(F̂n(t))}, then it is proved that i(n)/n tends to 1 as n → ∞. Thus, once again, one cannot choose one estimate over the other in terms of their asymptotic relative efficiency. If, however, the squared bias of F̂n(t) tends to 0 sufficiently fast, or equivalently the bandwidth hn satisfies nh³n → 0 as n → ∞, it is shown that, for a suitable choice of the kernel, (i(n) − n)/(n·hn) tends to a positive number as n → ∞. It follows that the deficiency of Fn(t) with respect to F̂n(t), i(n) − n, is substantial and actually tends to ∞ as n → ∞. In terms of deficiency, the smooth estimate F̂n(t) is preferable to the empirical distribution function Fn(t).
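Both estimators are one-liners in code. A minimal sketch (on i.i.d. standard normal data rather than a quadrant-dependent sequence, and with an assumed bandwidth h = n^(−1/3)) that Monte-Carlo-estimates the MSE of the empirical and kernel-smoothed distribution functions at t = 0:

```python
import numpy as np
from scipy.stats import norm

def edf(x, t):
    # empirical distribution function Fn(t)
    return np.mean(x <= t)

def smooth_edf(x, t, h):
    # kernel-smoothed estimate: average of the integrated Gaussian kernel
    return np.mean(norm.cdf((t - x) / h))

rng = np.random.default_rng(1)
n, t, reps = 200, 0.0, 3000
h = n ** (-1 / 3)            # assumed bandwidth
true_F = norm.cdf(t)         # 0.5 for standard normal data at t = 0

err_e, err_s = [], []
for _ in range(reps):
    x = rng.standard_normal(n)
    err_e.append((edf(x, t) - true_F) ** 2)
    err_s.append((smooth_edf(x, t, h) - true_F) ** 2)

mse_e, mse_s = np.mean(err_e), np.mean(err_s)
```

Both MSEs come out near F(1 − F)/n; the smoothed estimate's is slightly smaller, consistent with the deficiency result above: the gain is second-order, of size h/n.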
95.
The posterior distribution of the likelihood is used to interpret the evidential meaning of P-values, posterior Bayes factors and Akaike's information criterion when comparing point null hypotheses with composite alternatives. Asymptotic arguments lead to simple re-calibrations of these criteria in terms of posterior tail probabilities of the likelihood ratio. (Prior) Bayes factors cannot be calibrated in this way as they are model-specific.
96.
Maximum likelihood estimation of the critical points of the failure rate and the mean residual life function is presented for the mixture inverse Gaussian model. Several important data sets are analyzed from this point of view. For each data set, bootstrapping is used to construct confidence intervals for the critical points.
97.
A Study of the Distribution Functions of Returns in the Chinese Stock Market (cited: 14; self-citations: 6; by others: 14)
Building on a review of the various distribution functions used in the literature to describe stock returns, this paper takes the stable Paretian distribution and the t distribution as candidates, studies the form of the distribution functions of the returns of the composite indices of the Shanghai and Shenzhen stock markets, and estimates the parameters of these distribution functions.
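A minimal sketch of the candidate-distribution comparison, on simulated heavy-tailed returns standing in for an index series (the df = 4 and scale values are assumptions, not estimates from the Chinese data):

```python
import numpy as np
from scipy.stats import t, norm

rng = np.random.default_rng(3)
# simulated daily returns with heavy tails (hypothetical parameters)
returns = t.rvs(df=4, scale=0.015, size=2000, random_state=rng)

# ML fit of the Student-t candidate
df_hat, loc_hat, scale_hat = t.fit(returns)
ll_t = t.logpdf(returns, df_hat, loc_hat, scale_hat).sum()

# normal benchmark for comparison
mu_hat, sd_hat = norm.fit(returns)
ll_norm = norm.logpdf(returns, mu_hat, sd_hat).sum()
```

On heavy-tailed data the t log-likelihood clearly exceeds the normal one; a stable Paretian candidate could be fitted analogously with `scipy.stats.levy_stable`, though its likelihood is much more expensive to evaluate.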
98.
On the Effect of Probability Distributions of Input Variables in Public Health Risk Assessment (cited 1 time; 0 self-citations, 1 by others)
A central part of probabilistic public health risk assessment is the selection of probability distributions for the uncertain input variables. In this paper, we apply the first-order reliability method (FORM)(1–3) as a probabilistic tool to assess the effect of probability distributions of the input random variables on the probability that risk exceeds a threshold level (termed the probability of failure) and on the relevant probabilistic sensitivities. The analysis was applied to a case study given by Thompson et al.(4) on cancer risk caused by the ingestion of benzene-contaminated soil. Normal, lognormal, and uniform distributions were used in the analysis. The results show that the selection of a probability distribution function for the uncertain variables in this case study had a moderate impact on the probability that values would fall above a given threshold risk when the threshold risk is at the 50th percentile of the original distribution given by Thompson et al.(4) The impact was much greater when the threshold risk level was at the 95th percentile. The impact on uncertainty sensitivity, however, showed a reversed trend, where the impact was more appreciable for the 50th percentile of the original distribution of risk given by Thompson et al.(4) than for the 95th percentile. Nevertheless, the choice of distribution shape did not alter the order of probabilistic sensitivity of the basic uncertain variables.
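The qualitative finding — that the choice of input distribution matters more at a tail threshold than at the median — can be reproduced with a crude Monte Carlo sketch (not FORM itself). The single uncertain input and its moments below are hypothetical, not taken from the case study:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200_000
mean_c, sd_c = 10.0, 3.0   # hypothetical mean / sd of the uncertain input

# two candidate input distributions with identical first two moments
x_normal = rng.normal(mean_c, sd_c, N)
s2 = np.log(1.0 + (sd_c / mean_c) ** 2)
x_lognorm = rng.lognormal(np.log(mean_c) - s2 / 2, np.sqrt(s2), N)

# probability of exceeding the normal-based 50th / 95th percentile threshold
thr50, thr95 = np.quantile(x_normal, [0.50, 0.95])
p50 = np.mean(x_lognorm > thr50)   # vs 0.50 under the normal choice
p95 = np.mean(x_lognorm > thr95)   # vs 0.05 under the normal choice

rel50 = abs(p50 - 0.50) / 0.50     # relative shift at the median threshold
rel95 = abs(p95 - 0.05) / 0.05     # relative shift at the tail threshold
```

Switching the distributional shape moves the median exceedance probability only modestly, while the tail exceedance probability shifts by a much larger relative amount — the same pattern the paper reports for the 50th versus 95th percentile thresholds.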
99.
Estimation from Zero-Failure Data (cited 2 times; 0 self-citations, 2 by others)
Robert T. Bailey, Risk Analysis, 1997, 17(3): 375–380
When performing quantitative (or probabilistic) risk assessments, it is often the case that data for many of the potential events in question are sparse or nonexistent. Some of these events may be well represented by the binomial probability distribution. In this paper, a model for predicting the binomial failure probability, P, from data that include no failures is examined. A review of the literature indicates that the use of this model is currently limited to risk analysis of energetic initiation in the explosives testing field. The basis for the model is discussed, and the behavior of the model relative to other models developed for the same purpose is investigated. It is found that the qualitative behavior of the model is very similar to that of the other models, and for larger values of n (the number of trials), the predicted P values varied by a factor of about eight among the five models examined. Analysis revealed that the estimator is nearly identical to the median of a Bayesian posterior distribution derived using a uniform prior. An explanation of the application of the estimator in explosives testing is provided, and comments are offered regarding the use of the estimator versus other possible techniques.
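The Bayesian connection noted above is easy to check numerically: with a uniform Beta(1, 1) prior on P and zero failures in n trials, the posterior is Beta(1, n + 1), whose median has a closed form.

```python
from scipy.stats import beta

def zero_failure_posterior_median(n):
    # posterior of P after 0 failures in n trials, uniform prior: Beta(1, n + 1)
    return beta.ppf(0.5, 1, n + 1)

def zero_failure_closed_form(n):
    # Beta(1, b) has cdf 1 - (1 - p)**b, so the median is 1 - 0.5**(1/b)
    return 1.0 - 0.5 ** (1.0 / (n + 1))

p_med = zero_failure_posterior_median(100)
```

For large n the median is approximately ln 2 / (n + 1), i.e. roughly 0.69/n — the familiar shape of zero-failure rules of thumb.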
100.
Clemens Heuberger, Journal of Combinatorial Optimization, 2004, 8(3): 329–361
Given a (combinatorial) optimization problem and a feasible solution to it, the corresponding inverse optimization problem is to find a minimal adjustment of the cost function such that the given solution becomes optimum. Several such problems have been studied in the last twelve years. After formalizing the notion of an inverse problem and its variants, we present various methods for solving them. Then we discuss the problems considered in the literature and the results that have been obtained. Finally, we formulate some open problems.
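The notion can be made concrete on a deliberately tiny instance (everything below is illustrative, not from the survey): the underlying problem is "pick the cheapest item", and the inverse problem asks for the smallest L1 change to the cost vector that makes a designated item optimal.

```python
def inverse_cheapest_item(costs, j):
    """Minimal L1 adjustment of `costs` making item j a minimum-cost choice.

    Any feasible adjustment must close the gap between costs[j] and the
    cheapest rival, so lowering costs[j] by exactly that gap is L1-optimal.
    """
    rival = min(c for i, c in enumerate(costs) if i != j)
    delta = max(0.0, costs[j] - rival)
    adjusted = list(costs)
    adjusted[j] -= delta
    return adjusted, delta

adjusted, delta = inverse_cheapest_item([5.0, 3.0, 8.0], 2)
```

For richer problems (inverse shortest path, inverse spanning tree) the same question becomes a nontrivial linear or combinatorial program, which is what the surveyed literature addresses.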