Full-text access type
Paid full text | 881 articles |
Free | 69 articles |
Free in China | 1 article |
Subject category
Management | 187 articles |
Ethnology | 5 articles |
Demography | 21 articles |
Collected works | 30 articles |
Theory and methodology | 29 articles |
General | 135 articles |
Sociology | 61 articles |
Statistics | 483 articles |
Publication year
2023 | 3 articles |
2022 | 3 articles |
2021 | 11 articles |
2020 | 10 articles |
2019 | 35 articles |
2018 | 20 articles |
2017 | 31 articles |
2016 | 8 articles |
2015 | 21 articles |
2014 | 21 articles |
2013 | 208 articles |
2012 | 64 articles |
2011 | 44 articles |
2010 | 39 articles |
2009 | 35 articles |
2008 | 48 articles |
2007 | 30 articles |
2006 | 23 articles |
2005 | 23 articles |
2004 | 27 articles |
2003 | 20 articles |
2002 | 14 articles |
2001 | 18 articles |
2000 | 7 articles |
1999 | 7 articles |
1998 | 6 articles |
1997 | 3 articles |
1996 | 1 article |
1995 | 6 articles |
1994 | 11 articles |
1993 | 14 articles |
1992 | 17 articles |
1991 | 17 articles |
1990 | 18 articles |
1989 | 17 articles |
1988 | 10 articles |
1987 | 1 article |
1986 | 7 articles |
1985 | 7 articles |
1984 | 9 articles |
1983 | 3 articles |
1982 | 12 articles |
1981 | 13 articles |
1980 | 8 articles |
1978 | 1 article |
Sort by: 951 results found; search took 0 ms
151.
The potential of neural networks for classification problems has been established by numerous successful applications reported in the literature. One of the major assumptions used in almost all studies is the equal cost consequence of misclassification. With this assumption, minimizing the total number of misclassification errors is the sole objective in developing a neural network classifier. Often this is done simply to ease model development and the selection of classification decision points. However, it is not appropriate for many real situations such as quality assurance, direct marketing, bankruptcy prediction, and medical diagnosis, where misclassification costs have unequal consequences for different categories. In this paper, we investigate the issue of unequal misclassification costs in neural network classifiers. Through an application in thyroid disease diagnosis, we find that different cost considerations have significant effects on classification performance and that appropriate use of cost information can aid in optimal decision making. A cross-validation technique is employed to alleviate the problem of bias in the training set and to examine the robustness of neural network classifiers with regard to sampling variations and cost differences. Similar articles
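As a minimal illustration of the cost-asymmetry point above (not the paper's neural-network model), the expected-cost-minimizing decision rule simply shifts the threshold on the classifier's posterior probability away from 0.5. The function names and costs here are illustrative:

```python
# Hedged sketch: with unequal misclassification costs, predicting
# "positive" costs (1 - p) * cost_fp in expectation, predicting
# "negative" costs p * cost_fn; equating the two gives the optimal cutoff.
def cost_threshold(cost_fp, cost_fn):
    """Posterior-probability threshold that minimizes expected cost.

    Predict 'positive' when p >= threshold. With equal costs this
    reduces to the usual 0.5 cutoff.
    """
    return cost_fp / (cost_fp + cost_fn)

def classify(p, cost_fp=1.0, cost_fn=1.0):
    """Classify from an estimated posterior p = P(positive | x)."""
    return "positive" if p >= cost_threshold(cost_fp, cost_fn) else "negative"
```

For example, in a diagnosis setting where a missed disease (false negative) is judged ten times as costly as a false alarm, the cutoff drops to 1/11 ≈ 0.09, so far more borderline cases are flagged.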
152.
Statistical agencies have conflicting obligations to protect confidential information provided by respondents to surveys or censuses and to make data available for research and planning activities. When the microdata themselves are to be released, in order to achieve these conflicting objectives, statistical agencies apply statistical disclosure limitation (SDL) methods to the data, such as noise addition, swapping or microaggregation. Some of these methods do not preserve important structure and constraints in the data, such as positivity of some attributes or inequality constraints between attributes. Failure to preserve constraints is not only problematic in terms of data utility, but also may increase disclosure risk. In this paper, we describe a method for SDL that preserves both positivity of attributes and the mean vector and covariance matrix of the original data. The basis of the method is to apply multiplicative noise with the proper, data-dependent covariance structure. Similar articles
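A toy one-dimensional sketch of the idea, not the paper's multivariate method (which tunes the noise covariance so no post-correction is needed): multiplicative noise keeps a positive attribute positive, and a moment correction restores the original sample mean and standard deviation.

```python
import random
import statistics

# Hedged 1-D sketch of multiplicative-noise masking. The paper's method
# is multivariate and preserves the full mean vector and covariance
# matrix by construction; this toy version only matches the sample mean
# and standard deviation of a single positive attribute.
def mask(values, noise_sd=0.1, seed=0):
    rng = random.Random(seed)
    # Multiplicative lognormal noise (mean ~ 1) keeps values positive.
    noisy = [v * rng.lognormvariate(0.0, noise_sd) for v in values]
    # Moment correction: rescale so masked data match the original sample
    # mean and standard deviation. (This affine shift can, in principle,
    # push extreme values negative; the paper's covariance-tuned noise
    # avoids that failure mode.)
    m0, s0 = statistics.mean(values), statistics.stdev(values)
    m1, s1 = statistics.mean(noisy), statistics.stdev(noisy)
    return [m0 + s0 * (y - m1) / s1 for y in noisy]
```

The released values differ record by record from the originals, yet first- and second-moment analyses on the masked file reproduce those on the confidential file.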
153.
Willem Albers, Journal of Statistical Planning and Inference, 2011, 141(9): 3151-3159
For attribute data with (very) small failure rates, often control charts are used which decide whether to stop or to continue each time r failures have occurred, for some r ≥ 1. Because of the small probabilities involved, such charts are very sensitive to estimation effects. This is true in particular if the underlying failure rate varies and hence the distributions involved are not geometric. Such a situation calls for a nonparametric approach, but this may require far more Phase I observations than are typically available in practice. In the present paper it is shown how this obstacle can be effectively overcome by looking not at the sum but rather at the maximum of each group of size r. Similar articles
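A loose sketch of the max-of-groups idea (the paper's actual chart design and limit calibration differ): group the inter-failure counts in batches of r, estimate a lower control limit nonparametrically from Phase I group maxima, and signal when a new group's maximum is suspiciously small. The function names and the quantile rule are illustrative:

```python
# Hedged sketch: monitoring groups of r inter-failure counts via the
# group *maximum* rather than the sum. Small counts mean failures are
# arriving too quickly, so a small group maximum is the alarm condition.
def control_limit(phase1_maxima, alpha=0.01):
    """Lower control limit: empirical alpha-quantile of Phase I group maxima."""
    s = sorted(phase1_maxima)
    k = max(0, int(alpha * len(s)) - 1)
    return s[k]

def signals(group_counts, limit):
    """True if this group's maximum inter-failure count is at or below
    the limit, i.e. a possible increase in the failure rate."""
    return max(group_counts) <= limit
```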
154.
155.
Ernesto J. Veres-Ferrer, Communications in Statistics - Theory and Methods, 2017, 46(17): 8631-8646
Elasticity (or the elasticity function) is a new concept that allows us to characterize the probability distribution of any random variable in the same way as characteristic functions and hazard and reverse hazard functions do. Initially defined for continuous variables, it was necessary to extend the definition of elasticity and study its properties in the case of discrete variables. A first attempt to define discrete elasticity is seen in Veres-Ferrer and Pavía (2014a). This paper develops this definition and makes a comparative study of its properties, relating them to the properties shown by discrete hazard and reverse hazard, as both defined in Chechile (2011). Similar to continuous elasticity, one of the most interesting properties of discrete elasticity concerns the rate of change that it undergoes throughout its support. This paper centers on the study of that rate of change and develops a set of properties that allows us to carry out a detailed analysis. Finally, it addresses the calculation of the elasticity for the variable obtained by discretizing a continuous random variable, distinguishing whether its domain lies in the positive or negative reals. Similar articles
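For orientation only: in the continuous case, the elasticity of a distribution function F at x is commonly taken as e(x) = x·f(x)/F(x), i.e. d log F / d log x (the paper's discrete definition differs in detail and is not reproduced here). A small sketch for the exponential distribution:

```python
import math

# Hedged illustration, continuous case only: elasticity of the cdf,
#   e(x) = x * f(x) / F(x) = d log F(x) / d log x,
# evaluated for an exponential distribution. This is NOT the paper's
# discrete definition, just the general idea it extends.
def elasticity_exponential(x, rate=1.0):
    f = rate * math.exp(-rate * x)    # density
    F = 1.0 - math.exp(-rate * x)     # cdf
    return x * f / F
```

Near the origin the elasticity tends to 1 (F behaves like rate·x there), and it decays along the support, which is the kind of rate-of-change behavior the paper analyzes in the discrete setting.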
156.
Stephen T. Ziliak, The American Statistician, 2019, 73(1): 281-290
Abstract: A crisis of validity has emerged from three related crises of science, that is, the crises of statistical significance and complete randomization, of replication, and of reproducibility. Guinnessometrics takes commonplace assumptions and methods of statistical science and stands them on their head, from little p-values to unstructured Big Data. Guinnessometrics focuses instead on the substantive significance which emerges from a small series of independent and economical yet balanced and repeated experiments. Originally developed and market-tested by William S. Gosset aka “Student” in his job as Head Experimental Brewer at the Guinness Brewery in Dublin, Gosset’s economic and common sense approach to statistical inference and scientific method has been unwisely neglected. In many areas of science and life, the 10 principles of Guinnessometrics or G-values outlined here can help. Other things equal, the larger the G-values, the better the science and judgment. By now a colleague, neighbor, or YouTube junkie has probably shown you one of those wacky psychology experiments in a video involving a gorilla, testing the limits of human cognition. In one video, a person wearing a gorilla suit suddenly appears on the scene among humans, who are themselves engaged in some ordinary, mundane activity such as passing a basketball. The funny thing is, prankster researchers have discovered, when observers are asked to think about the mundane activity (such as by counting the number of observed passes of a basketball), the unexpected gorilla frequently goes unseen (for discussion see Kahneman 2011). The gorilla is invisible. People don’t see it. Similar articles
157.
Stephen J. Ruberg, Frank E. Harrell Jr., Margaret Gamalo-Siebers, Lisa LaVange, J. Jack Lee, Karen Price, The American Statistician, 2019, 73(1): 319-327
Abstract: The cost and time of pharmaceutical drug development continue to grow at rates that many say are unsustainable. These trends have enormous impact on what treatments get to patients, when they get them and how they are used. The statistical framework for supporting decisions in regulated clinical development of new medicines has followed a traditional path of frequentist methodology. Trials using hypothesis tests of “no treatment effect” are done routinely, and the p-value < 0.05 is often the determinant of what constitutes a “successful” trial. Many drugs fail in clinical development, adding to the cost of new medicines, and some evidence points blame at the deficiencies of the frequentist paradigm. An unknown number of effective medicines may have been abandoned because trials were declared “unsuccessful” due to a p-value exceeding 0.05. Recently, the Bayesian paradigm has shown utility in the clinical drug development process for its probability-based inference. We argue for a Bayesian approach that employs data from other trials as a “prior” for Phase 3 trials so that synthesized evidence across trials can be utilized to compute probability statements that are valuable for understanding the magnitude of treatment effect. Such a Bayesian paradigm provides a promising framework for improving statistical inference and regulatory decision making. Similar articles
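A toy conjugate example of the "prior from earlier trials" idea (the authors argue for the general paradigm, not this particular likelihood): responder counts from an earlier trial form a Beta prior, Phase 3 data update it, and the output is a direct probability statement about the response rate rather than a p-value. All names and counts here are illustrative:

```python
# Hedged Beta-Binomial sketch: prior successes/failures come from earlier
# trials; a uniform Beta(1, 1) base prior is added in. The posterior is
# Beta(a, b), and we integrate its density above a threshold numerically
# (midpoint rule on a grid) to get P(response rate > threshold | data).
def posterior_prob_exceeds(prior_success, prior_failure,
                           trial_success, trial_failure,
                           threshold, grid=20000):
    a = prior_success + trial_success + 1
    b = prior_failure + trial_failure + 1

    def dens(p):                      # unnormalized Beta(a, b) density
        return p ** (a - 1) * (1 - p) ** (b - 1)

    xs = [(i + 0.5) / grid for i in range(grid)]
    total = sum(dens(x) for x in xs)
    tail = sum(dens(x) for x in xs if x > threshold)
    return tail / total
```

For instance, 12 responders out of 20 in Phase 2 plus 30 out of 60 in Phase 3 yields a posterior probability that the response rate exceeds 40%, a statement regulators can weigh directly against a decision threshold.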
158.
We consider approximate inference in hybrid Bayesian Networks (BNs) and present a new iterative algorithm that efficiently combines dynamic discretization with robust propagation algorithms on junction trees. Our approach offers a significant extension to Bayesian Network theory and practice by offering a flexible way of modeling continuous nodes in BNs conditioned on complex configurations of evidence and intermixed with discrete nodes as both parents and children of continuous nodes. Our algorithm is implemented in a commercial Bayesian Network software package, AgenaRisk, which allows model construction and testing to be carried out easily. The results from the empirical trials clearly show how our software can deal effectively with different types of hybrid models containing elements of expert judgment as well as statistical inference. In particular, we show how the rapid convergence of the algorithm towards zones of high probability density makes robust inference analysis possible even in situations where, due to the lack of information in both prior and data, robust sampling becomes unfeasible. Similar articles
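A bare-bones sketch of what dynamic discretization means (this is not the AgenaRisk/junction-tree algorithm, just the core refinement loop): start from a coarse partition of a continuous node's range and repeatedly split the bin carrying the most probability mass, so resolution concentrates exactly in the zones of high density the abstract mentions.

```python
import math

# Hedged sketch of iterative bin refinement. pdf need not be normalized:
# only relative bin masses matter for choosing where to split.
def refine(pdf, lo, hi, n_bins=16, samples_per_bin=64):
    edges = [lo, hi]
    while len(edges) - 1 < n_bins:
        # Approximate each bin's probability mass by midpoint sampling.
        masses = []
        for a, b in zip(edges, edges[1:]):
            h = (b - a) / samples_per_bin
            masses.append(sum(pdf(a + (i + 0.5) * h) for i in range(samples_per_bin)) * h)
        k = masses.index(max(masses))               # heaviest bin
        edges.insert(k + 1, 0.5 * (edges[k] + edges[k + 1]))  # split it
    return edges
```

Running this on a standard-normal-shaped density over [-5, 5] leaves wide bins in the tails and progressively narrower bins near the mode, which is the behavior that lets the full algorithm converge quickly where the posterior mass actually lies.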
159.
Toward Defining the Causal Role of Consciousness: Using Models of Memory and Moral Judgment from Cognitive Neuroscience to Expand the Sociological Dual‐Process Model
Luis Antonio Vila‐Henninger, Journal for the Theory of Social Behaviour, 2015, 45(2): 238-260
What role does “discursive consciousness” play in decision‐making? How does it interact with “practical consciousness”? These two questions constitute two important gaps in strong practice theory that extend from Pierre Bourdieu's habitus to Stephen Vaisey's sociological dual‐process model and beyond. The goal of this paper is to provide an empirical framework that expands the sociological dual‐process model in order to fill these gaps using models from cognitive neuroscience. In particular, I use models of memory and moral judgment that highlight the importance of executive functions and semantic memory. I outline each model as it pertains to the aforementioned gaps in strong practice theory. I then use the models from cognitive neuroscience to create an expanded dual‐process model that addresses how and when conscious mental systems override and interact with subconscious mental systems in the use of cultural ends for decision‐making. Finally, using this expanded model I address the sociological debate over the use of interview and survey data. My analysis reveals that surveys and interviews both elicit information encoded in declarative memory and differ primarily in the process of information retrieval that is required of respondents. Similar articles
160.
Journal of Statistical Computation and Simulation, 2012, 82(10): 813-829
The non-central gamma distribution can be regarded as a general form of the non-central χ2 distributions, whose computations have been thoroughly investigated (Ruben, H., 1974, Non-central chi-square and gamma revisited. Communications in Statistics, 3(7), 607–633; Knüsel, L., 1986, Computation of the chi-square and Poisson distribution. SIAM Journal on Scientific and Statistical Computing, 7, 1022–1036; Voit, E.O. and Rust, P.F., 1987, Noncentral chi-square distributions computed by S-system differential equations. Proceedings of the Statistical Computing Section, ASA, pp. 118–121; Rust, P.F. and Voit, E.O., 1990, Statistical densities, cumulatives, quantiles, and power obtained by S-systems differential equations. Journal of the American Statistical Association, 85, 572–578; Chattamvelli, R., 1994, Another derivation of two algorithms for the noncentral χ2 and F distributions. Journal of Statistical Computation and Simulation, 49, 207–214; Johnson, N.J., Kotz, S. and Balakrishnan, N., 1995, Continuous Univariate Distributions, Vol. 2 (2nd edn) (New York: Wiley)). Both distributional function forms are usually expressed as weighted infinite series of the central one. The ad hoc approximations to cumulative probabilities of the non-central gamma were extended or discussed by Chattamvelli, by Knüsel and Bablok (Knüsel, L. and Bablok, B., 1996, Computation of the noncentral gamma distribution. SIAM Journal on Scientific Computing, 17, 1224–1231), and by Ruben (1974, cited above). However, they did not implement and demonstrate the proposed numerical procedures. Approximations to non-central densities and quantiles are not available. In addition, its S-system formulation has not been derived. Here, approximations to cumulative probabilities, density, and quantiles based on the method of Knüsel and Bablok are derived and implemented in R code.
Furthermore, two alternative S-system forms are recast on the basis of techniques of Savageau and Voit (Savageau, M.A. and Voit, E.O., 1987, Recasting nonlinear differential equations as S-systems: A canonical nonlinear form. Mathematical Biosciences, 87, 83–115) as well as Chen (Chen, Z.-Y., 2003, Computing the distribution of the squared sample multiple correlation coefficient with S-Systems. Communications in Statistics—Simulation and Computation, 32(3), 873–898) and Chen and Chou (Chen, Z.-Y. and Chou, Y.-C., 2000, Computing the noncentral beta distribution with S-system. Computational Statistics and Data Analysis, 33, 343–360). Statistical densities, cumulative probabilities, and quantiles can all be evaluated with a single numerical solver, power law analysis and simulation (PLAS). With the newly derived S-systems of the non-central gamma, the specialized non-central χ2 distributions are demonstrated under five cases in the same three situations studied by Rust and Voit; the paired numerical values are almost equal. Based on these, nine cases in three similar situations are designed for demonstration and evaluation. In addition, exact values in finite significant digits are provided for comparison. Demonstrations are conducted with the R package and the PLAS solver on the same PC system. In this way, very accurate and consistent numerical results are obtained from the three methods in two groups. The three methods are also competitive with respect to computation speed. Similar articles
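The "weighted infinite series of the central one" the abstract refers to is the standard Poisson-mixture representation: the non-central gamma cdf with shape a and noncentrality d is F(x; a, d) = Σ_k e^{-d} d^k / k! · P(a + k, x), where P is the regularized lower incomplete gamma function. A self-contained sketch (truncation limits and tolerances are illustrative, not the paper's tuned implementation):

```python
import math

# Hedged sketch of the Poisson-mixture series for the non-central gamma
# cdf. P(a, x) is computed from its power series:
#   P(a, x) = x^a e^{-x} * sum_{n>=0} x^n / Gamma(a + n + 1).
def reg_lower_gamma(a, x, terms=200):
    """Regularized lower incomplete gamma function P(a, x)."""
    if x <= 0:
        return 0.0
    total, term = 0.0, 1.0 / math.gamma(a + 1)
    for n in range(terms):
        total += term
        term *= x / (a + n + 1)
    return total * math.exp(-x) * x ** a

def noncentral_gamma_cdf(x, shape, delta, terms=200):
    """F(x; shape, delta) as a Poisson(delta)-weighted series of central
    gamma cdfs. delta = 0 recovers the central gamma distribution."""
    cdf, w = 0.0, math.exp(-delta)        # Poisson(delta) weights
    for k in range(terms):
        cdf += w * reg_lower_gamma(shape + k, x)
        w *= delta / (k + 1)
        if w < 1e-16 and k > delta:       # weights are now negligible
            break
    return cdf
```

As a sanity check, with delta = 0 and shape = 1 this reduces to the exponential cdf 1 - e^{-x}, and the non-central χ2 cdf follows from the scaling relation F_ncχ2(x; df, nc) = F(x/2; df/2, nc/2).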