Full-text access type | Articles |
Paid full text | 31068 |
Free | 572 |
Free (domestic) | 3 |
Subject category | Articles |
Management | 4367 |
Ethnology | 156 |
Talent studies | 2 |
Demography | 2778 |
Collected works and series | 121 |
Popular education | 1 |
Theory and methodology | 2920 |
General | 308 |
Sociology | 15515 |
Statistics | 5475 |
Publication year | Articles |
2023 | 168 |
2021 | 166 |
2020 | 493 |
2019 | 703 |
2018 | 801 |
2017 | 1043 |
2016 | 845 |
2015 | 584 |
2014 | 815 |
2013 | 5365 |
2012 | 1022 |
2011 | 950 |
2010 | 742 |
2009 | 659 |
2008 | 826 |
2007 | 769 |
2006 | 801 |
2005 | 728 |
2004 | 738 |
2003 | 619 |
2002 | 657 |
2001 | 724 |
2000 | 655 |
1999 | 634 |
1998 | 526 |
1997 | 455 |
1996 | 447 |
1995 | 472 |
1994 | 433 |
1993 | 441 |
1992 | 472 |
1991 | 491 |
1990 | 474 |
1989 | 405 |
1988 | 426 |
1987 | 353 |
1986 | 382 |
1985 | 416 |
1984 | 389 |
1983 | 351 |
1982 | 302 |
1981 | 244 |
1980 | 279 |
1979 | 307 |
1978 | 259 |
1977 | 217 |
1976 | 203 |
1975 | 174 |
1974 | 186 |
1973 | 137 |
A total of 10000 query results were found (search time: 15 ms).
981.
Edimilson Batista dos Santos, Nelson F. F. Ebecken, Estevam R. Hruschka Jr., Ali Elkamel, Chandra M. R. Madhuranthakam. Risk Analysis, 2014, 34(3): 485-497
Classification is a central task in fault diagnosis. Bayesian networks (BNs) offer several advantages for classification, and previous works have suggested their use as classifiers. Because a classifier is often only one part of a larger decision process, this article proposes, for industrial process diagnosis, a Bayesian method called the dynamic Markov blanket classifier, whose main goal is to induce accurate Bayesian classifiers that have dependable probability estimates and reveal the actual relationships among the most relevant variables. In addition, a new method named variable ordering multiple offspring sampling, which induces a BN for use as a classifier, is presented. The performance of these methods is assessed on data from a benchmark problem known as the Tennessee Eastman process. The results are compared with naive Bayes and tree-augmented network classifiers, and confirm that both proposed algorithms can provide good classification accuracy as well as knowledge about the relevant variables.
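A rough illustration of the kind of baseline the article compares against (a minimal sketch, not the authors' dynamic Markov blanket classifier or the variable ordering multiple offspring sampling method, and using synthetic Gaussian data in place of the Tennessee Eastman measurements):

```python
# Minimal naive Bayes fault-classification sketch on synthetic data.
# Class labels stand in for fault modes; the data generation is hypothetical.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_per_fault, n_vars, n_faults = 200, 10, 3           # hypothetical sizes
X = np.vstack([rng.normal(loc=k, scale=1.0, size=(n_per_fault, n_vars))
               for k in range(n_faults)])             # shifted means per fault mode
y = np.repeat(np.arange(n_faults), n_per_fault)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print("naive Bayes accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```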
982.
Despite the fact that sickle-cell disease (SCD) is perhaps the most “racialized” medical condition in the USA, very little is known about how “race” affects public support for health policies related to the condition. We embedded an experiment within the 2011 Cooperative Congressional Election Study in order to assess perceptions about SCD among 1250 participants from diverse backgrounds and to evaluate the extent to which these perceptions were associated with support for government spending on SCD-related benefits. We manipulated the racial phenotype of SCD advocates who requested additional government funding and asked participants to indicate how much the government should provide. Overall, participants expressed moderately positive attitudes about SCD, and there were no differences in funding support based on the race of the advocate. However, white participants supported less funding than nonwhite participants, even after adjusting for a number of demographic and attitudinal covariates. These findings suggest that a complex relationship between racial identification and implicit racism may shape public perceptions of SCD in ways that negatively influence support for SCD-related policy.
983.
We measure the relative ideological positions of newspapers, voters, interest groups, and political parties, using data on ballot propositions. We exploit the fact that newspapers, parties, and interest groups take positions on these propositions, and the fact that citizens ultimately vote on them. We find that, on average, newspapers in the United States are located almost exactly at the median voter in their states—that is, they are balanced around the median voter. Still, there is a significant amount of ideological heterogeneity across newspapers, though it is smaller than that found for interest groups. However, when we group propositions by issue area, we find a sizable amount of ideological imbalance: broadly speaking, newspapers are to the left of the state-level median voter on many social issues, and to the right on many economic issues. To complete the picture, we use two existing methods of measuring bias and show that the news and editorial sections of newspapers have almost identical partisan positions.
984.
Nolan A. Wages, Alexia Iasonos, John O'Quigley, Mark R. Conaway. Pharmaceutical Statistics, 2020, 19(2): 137-144
This paper studies the notion of coherence in interval-based dose-finding methods. An incoherent decision is either (a) a recommendation to escalate the dose following an observed dose-limiting toxicity or (b) a recommendation to de-escalate the dose following a non-dose-limiting toxicity. In a simulated example, we illustrate that the Bayesian optimal interval method and the Keyboard method are not coherent. We generated dose-limiting toxicity outcomes under an assumed set of true probabilities for a trial of n=36 patients in cohorts of size 1, and we counted the number of incoherent dosing decisions made throughout this simulated trial. Each of the methods studied resulted in 13/36 (36%) incoherent decisions in the simulated trial. Additionally, for two different target dose-limiting toxicity rates, 20% and 30%, and a sample size of n=30 patients, we randomly generated 100 dose-toxicity curves and tabulated the number of incoherent decisions made by each method in 1000 simulated trials under each curve. For each method studied, the probability of incurring at least one incoherent decision during the conduct of a single trial is greater than 75%. Coherence is an important principle in the conduct of dose-finding trials. Interval-based methods violate this principle for cohorts of size 1 and require additional modifications to overcome this shortcoming. Researchers need to take a closer look at the dose assignment behavior of interval-based methods when using them to plan dose-finding studies.
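The coherence criterion stated above is easy to check mechanically. Below is a small sketch (not the Bayesian optimal interval or Keyboard implementations) that counts incoherent decisions in a sequence of single-patient cohorts; the dose and toxicity data are made up for illustration:

```python
# Count incoherent decisions: escalation right after a dose-limiting toxicity (DLT),
# or de-escalation right after a non-DLT, for cohorts of size 1.
def count_incoherent(doses, dlts):
    """doses[i]: dose level given to patient i; dlts[i]: 1 if patient i had a DLT."""
    incoherent = 0
    for i in range(1, len(doses)):
        escalated = doses[i] > doses[i - 1]
        deescalated = doses[i] < doses[i - 1]
        if (dlts[i - 1] == 1 and escalated) or (dlts[i - 1] == 0 and deescalated):
            incoherent += 1
    return incoherent

# Hypothetical trial of 6 patients: prints 1 (one de-escalation after a non-DLT).
print(count_incoherent(doses=[1, 2, 2, 3, 2, 2], dlts=[0, 1, 0, 0, 0, 1]))
```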
985.
Nonparametric Estimation of the Number of Drug Users in Hong Kong Using Repeated Multiple Lists
Richard M. Huggins, Paul S.F. Yip, Jakub Stoklosa. Australian & New Zealand Journal of Statistics, 2016, 58(1): 1-13
We update a previous approach to the estimation of the size of an open population when there are multiple lists at each time point. Our motivation is 35 years of longitudinal data on the detection of drug users by the Central Registry of Drug Abuse in Hong Kong. We develop a two-stage smoothing spline approach. This gives a flexible and easily implemented alternative to the previous method, which was based on kernel smoothing. The new method retains the property of reducing the variability of the individual estimates at each time point. We evaluate the new method by means of a simulation study that includes an examination of the effects of variable selection. The new method is then applied to data collected by the Central Registry of Drug Abuse, and the parameter estimates obtained are compared with the well-known Jolly–Seber estimates based on single-capture methods.
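For readers unfamiliar with multiple-list estimation, a single time point with two lists conveys the basic idea. The following is only a sketch of the classical Chapman estimator with hypothetical counts, not the two-stage smoothing-spline method described above:

```python
# Two-list (Petersen/Chapman) population size estimate at a single time point.
def chapman(n1, n2, m):
    """n1, n2: sizes of the two lists; m: number of individuals appearing on both."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical counts for one year of registry data.
print(chapman(n1=1200, n2=900, m=300))   # estimated number of drug users that year
```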
986.
The class of affine LIBOR models is appealing since it satisfies three central requirements of interest rate modeling: it is arbitrage-free, interest rates are nonnegative, and caplet and swaption prices can be calculated analytically. In order to guarantee nonnegative interest rates, affine LIBOR models are driven by nonnegative affine processes, a restriction that makes it hard to produce volatility smiles. We modify the affine LIBOR models in such a way that real-valued affine processes can be used without destroying the nonnegativity of interest rates. Numerical examples show that pronounced volatility smiles are possible in this class of models.
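For context, the "affine" in affine LIBOR models refers to the standard transform property of affine processes (stated here in its generic form, not the authors' specific construction): the conditional moment generating function is exponentially affine in the state,

\[
\mathbb{E}\left[\left. e^{u X_T} \right| X_t = x\right] = \exp\bigl(\phi_{T-t}(u) + \psi_{T-t}(u)\,x\bigr),
\]

where \(\phi\) and \(\psi\) solve generalized Riccati equations. Driving the model with nonnegative affine processes keeps the resulting rates nonnegative, and it is this restriction on the driver that the paper relaxes while preserving nonnegative rates.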
987.
988.
Two new nonparametric common principal component model selection procedures are proposed, based on bootstrap distributions of the vector correlations of all combinations of the eigenvectors from two groups. The performance of these methods is compared in a simulation study with the two parametric methods previously suggested by Flury in 1988, as well as with modified versions of two nonparametric methods proposed by Klingenberg in 1996 and by Klingenberg and McIntyre in 1998. The proposed bootstrap vector correlation distribution (BVD) method is shown to outperform all of the existing methods in most of the simulated situations considered.
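A simplified illustration of the bootstrap idea only, restricted to the leading eigenvector pair of two synthetic two-dimensional groups (so not the full BVD procedure across all eigenvector combinations):

```python
# Bootstrap the absolute cosine between the two groups' leading covariance
# eigenvectors as a crude "vector correlation". All data here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
g1 = rng.multivariate_normal([0, 0], [[3.0, 1.0], [1.0, 1.0]], size=100)
g2 = rng.multivariate_normal([0, 0], [[3.0, 0.9], [0.9, 1.2]], size=100)

def leading_eigvec(x):
    vals, vecs = np.linalg.eigh(np.cov(x, rowvar=False))
    return vecs[:, np.argmax(vals)]

boot = []
for _ in range(500):
    b1 = g1[rng.integers(0, len(g1), len(g1))]   # resample group 1 with replacement
    b2 = g2[rng.integers(0, len(g2), len(g2))]   # resample group 2 with replacement
    boot.append(abs(leading_eigvec(b1) @ leading_eigvec(b2)))

print("bootstrap mean |cos angle| between leading eigenvectors:", np.mean(boot))
```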
989.
In nonregular problems where the conventional \(n\) out of \(n\) bootstrap is inconsistent, the \(m\) out of \(n\) bootstrap provides a useful remedy to restore consistency. Conventionally, the optimal choice of the bootstrap sample size \(m\) is taken to be the minimiser of a frequentist error measure, estimation of which has posed a major difficulty hindering practical application of the \(m\) out of \(n\) bootstrap method. Relatively little attention has been paid to a stronger, stochastic, version of the optimal bootstrap sample size, defined as the minimiser of an error measure calculated directly from the observed sample. Motivated by this stronger notion of optimality, we develop procedures for calculating the stochastically optimal value of \(m\). Our procedures are shown to work under special forms of Edgeworth-type expansions which are typically satisfied by statistics of the shrinkage type. Theoretical and empirical properties of our methods are illustrated with three examples, namely the James–Stein estimator, the ridge regression estimator and the post-model-selection regression estimator.
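To illustrate the setting (with \(m\) fixed arbitrarily rather than chosen by the stochastically optimal criterion developed in the paper), here is a sketch of the \(m\) out of \(n\) bootstrap for a classic nonregular statistic, the sample maximum of uniform data, where the ordinary \(n\) out of \(n\) bootstrap is inconsistent:

```python
# m out of n bootstrap for the sample maximum of Uniform(0, theta) data.
import numpy as np

rng = np.random.default_rng(2)
theta, n, m = 1.0, 1000, 100                      # m << n; this choice is ad hoc
x = rng.uniform(0, theta, size=n)
theta_hat = x.max()

# Bootstrap draws of theta_hat minus the maximum of a resample of size m.
reps = np.array([theta_hat - rng.choice(x, size=m, replace=True).max()
                 for _ in range(2000)])
# Scaled by m, these approximate the sampling error distribution of the maximum.
print("approx. 95% quantile of m*(theta_hat - resample max):",
      np.quantile(m * reps, 0.95))
```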
990.
T. Chen, K. Knox, J. Arora, W. Tang, J. Kowalski, X.M. Tu. Journal of Applied Statistics, 2016, 43(6): 979-995
Power analysis for multi-center randomized controlled trials is quite difficult to perform for non-continuous responses when site differences are modeled by random effects using the generalized linear mixed-effects model (GLMM). First, it is not possible to construct power functions analytically, because of the extreme complexity of the sampling distribution of the parameter estimates. Second, Monte Carlo (MC) simulation, a popular option for estimating power for complex models, does not work within the current context because of a lack of methods and software packages that provide reliable estimates for fitting such GLMMs; at the time of writing, even statistical packages from software giants like SAS do not provide reliable estimates. Another major limitation of MC simulation is the lengthy running time, especially for complex models such as the GLMM and when estimating power for multiple scenarios of interest. We present a new approach to address these limitations. The proposed approach defines a marginal model to approximate the GLMM and estimates power without relying on MC simulation. The approach is illustrated with both real and simulated data, with the simulation study demonstrating good performance of the method.
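As background only, a textbook marginal approximation for clustered binary outcomes (not the marginal model proposed in the paper): with \(n\) patients per arm and response rates \(p_1\) and \(p_2\), power for a two-sided level-\(\alpha\) comparison is often approximated by inflating the variance with a design effect,

\[
\text{power} \approx \Phi\!\left(\frac{|p_1-p_2|\,\sqrt{n}}{\sqrt{\bigl(p_1(1-p_1)+p_2(1-p_2)\bigr)\,\mathrm{DE}}} - z_{1-\alpha/2}\right),
\qquad \mathrm{DE} = 1 + (\bar m - 1)\rho,
\]

where \(\bar m\) is the average number of patients per site and \(\rho\) an intraclass correlation. The design-effect form shown assumes simple exchangeable clustering and is only a crude stand-in for the GLMM-based calculation discussed above.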