774 results found (search time: 312 ms)
191.
《Journal of Statistical Computation and Simulation》2012,82(15):3093-3105
In economics and government statistics, aggregated data rather than individual-level data are usually reported, both for data confidentiality and for simplicity. In this paper we develop a method of flexibly estimating the probability density function of the population from aggregated data obtained as group averages when individual-level data are grouped according to quantile limits. The kernel density estimator has commonly been applied to such data without taking the aggregation process into account and has been shown to perform poorly. Our method models the quantile function as an integral of the exponential of a spline function and deduces the density function from the quantile function. We match the aggregated data to their theoretical counterparts using least squares, and regularize the estimation by using the squared second derivatives of the density function as the penalty function. A computational algorithm is developed to implement the method. Applications to simulated data and US household income survey data show that our penalized spline estimator can accurately recover the density function of the underlying population, while the common kernel density estimator is severely biased. The method is applied to study the dynamics of China's urban income distribution using published interval-aggregated data for 1985–2010.
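The aggregation bias that motivates the paper can be seen in a small numerical sketch (illustrative assumptions throughout: the lognormal sample and decile grouping are stand-ins, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(42)

# A skewed "income" population (lognormal stand-in; not the paper's data).
data = rng.lognormal(mean=10.0, sigma=0.6, size=100_000)

# Published interval-aggregated data: only the decile group averages.
edges = np.quantile(data, np.linspace(0, 1, 11))
group_means = np.array([
    data[(data >= lo) & (data <= hi)].mean()
    for lo, hi in zip(edges[:-1], edges[1:])
])

# The ten averages retain only the between-group spread; the within-group
# spread is lost, so a KDE fitted to the averages understates the tails.
compressed = np.var(group_means) < np.var(data)   # True
```

This is why the paper fits the quantile function to the group averages directly instead of smoothing the averages as if they were raw observations.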
192.
A growth curve analysis is often applied to estimate patterns of change in a given characteristic across different individuals. It is also used to find out whether the variations in growth rates among individuals are due to the effects of certain covariates. In this paper, a random coefficient linear regression model, as a special case of the growth curve analysis, is generalized to accommodate the situation where the set of influential covariates is not known a priori. Two different approaches for selecting influential covariates (a weighted stepwise selection procedure and a modified version of Rao and Wu's selection criterion) for the random slope coefficient of a linear regression model with unbalanced data are proposed. Performances of these methods are evaluated by means of Monte Carlo simulation. In addition, several methods (maximum likelihood, restricted maximum likelihood, pseudo maximum likelihood and method of moments) for estimating the parameters of the selected model are compared. The proposed variable selection schemes and estimators are applied to the actual industrial problem that motivated this investigation.
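As a hedged illustration of one of the compared estimators, a method-of-moments estimate of the random-slope variance can be sketched as follows (all model values are illustrative, not from the paper's industrial application):

```python
import numpy as np

rng = np.random.default_rng(7)

# Random-coefficient model: y_ij = (beta + b_i) * x_ij + e_ij,
# with b_i ~ N(0, sigma_b^2) and e_ij ~ N(0, sigma_e^2).
n_subj, n_obs = 200, 20
beta, sigma_b, sigma_e = 2.0, 0.5, 0.1

slopes, sampling_vars = [], []
for _ in range(n_subj):
    b_i = rng.normal(0.0, sigma_b)
    x = rng.uniform(0.0, 1.0, n_obs)
    y = (beta + b_i) * x + rng.normal(0.0, sigma_e, n_obs)
    xc = x - x.mean()
    sxx = (xc ** 2).sum()
    slope_hat = (xc * y).sum() / sxx            # per-subject OLS slope
    resid = y - y.mean() - slope_hat * xc
    s2 = (resid ** 2).sum() / (n_obs - 2)       # residual variance
    slopes.append(slope_hat)
    sampling_vars.append(s2 / sxx)              # Var(slope_hat | b_i)

# Method of moments: Var(slope_hat_i) = sigma_b^2 + sigma_e^2 / Sxx_i,
# so subtract the average sampling variance from the spread of the slopes.
sigma_b2_hat = np.var(slopes, ddof=1) - np.mean(sampling_vars)
```

With balanced, precise data the correction term is small; with unbalanced data (varying Sxx_i) it matters, which is one reason the paper compares this estimator with the likelihood-based ones.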
193.
Helmut Finner, Veronika Gontscharuk, Thorsten Dickhaus 《Scandinavian Journal of Statistics》2012,39(2):382-397
Abstract. This paper is concerned with exact control of the false discovery rate (FDR) for step‐up‐down (SUD) tests related to the asymptotically optimal rejection curve (AORC). Since the system of equations and/or constraints for critical values and FDRs is numerically extremely sensitive, existence and computation of valid solutions is a challenging problem. We derive explicit formulas for upper bounds of the FDR and show that under a well‐known monotonicity condition, control of the FDR by a step‐up procedure results in control of the FDR by a corresponding SUD procedure. Various methods for adjusting the AORC to achieve finite FDR control are investigated. Moreover, we introduce alternative FDR bounding curves and study their connection to rejection curves as well as the existence of critical values for exact FDR control with respect to the underlying FDR bounding curve. Finally, we propose an iterative method for the computation of critical values.
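The AORC and its implied step-up critical values have simple closed forms; the sketch below (a minimal illustration, not the paper's adjusted procedures) also shows why finite-sample adjustment is needed: the largest AORC critical value equals 1.

```python
import numpy as np

def aorc_critical_values(n, alpha):
    # Step-up critical values implied by the AORC
    #   f_alpha(t) = t / (t * (1 - alpha) + alpha):
    #   alpha_{i:n} = i * alpha / (n - i * (1 - alpha)), i = 1, ..., n.
    i = np.arange(1, n + 1)
    return i * alpha / (n - i * (1 - alpha))

def bh_critical_values(n, alpha):
    # Benjamini-Hochberg critical values, for comparison.
    i = np.arange(1, n + 1)
    return i * alpha / n

n, alpha = 20, 0.05
aorc = aorc_critical_values(n, alpha)
bh = bh_critical_values(n, alpha)
# The AORC values dominate the BH values, and the last one equals 1,
# i.e. the largest p-value is always rejected -- hence the need for the
# finite-sample adjustments the paper investigates.
```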
194.
Sophie Bercu 《Journal of applied statistics》2013,40(6):1333-1348
A dynamic coupled model is investigated to take temperature into account in individual energy consumption forecasting. The objective is both to avoid the inherent complexity of exhaustive SARIMAX models and to take advantage of the usual linear relation between energy consumption and temperature for thermosensitive customers. We first recall some issues related to individual load curve forecasting. Then, we propose and study the properties of a dynamic coupled model that takes temperature into account as an exogenous contribution, and its application to the intraday prediction of energy consumption. Finally, these theoretical results are illustrated on a real individual load curve. The authors discuss the relevance of such an approach and anticipate that it could form a substantial alternative to the methods commonly used for energy consumption forecasting of individual customers.
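A minimal sketch of such a coupled model, assuming a linear autoregression with temperature as an exogenous term (an illustrative simplification, not the authors' exact formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy coupled model: load_t = a * load_{t-1} + b * temp_t + c + noise,
# an autoregression with temperature as an exogenous linear contribution.
n = 500
hours = np.arange(n)
temp = 10 + 5 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.5, n)
a_true, b_true, c_true = 0.6, -0.3, 5.0

load = np.empty(n)
load[0] = 10.0
for t in range(1, n):
    load[t] = (a_true * load[t - 1] + b_true * temp[t] + c_true
               + rng.normal(0, 0.05))

# Least-squares fit of (a, b, c) from the observed series.
X = np.column_stack([load[:-1], temp[1:], np.ones(n - 1)])
(a_hat, b_hat, c_hat), *_ = np.linalg.lstsq(X, load[1:], rcond=None)
```

One-step-ahead intraday prediction then follows by plugging the latest load and the temperature forecast into the fitted equation.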
195.
Receiver operating characteristic (ROC) curves are useful for studying the performance of diagnostic tests. ROC curves occur in many fields of application, including psychophysics, quality control and medical diagnostics. In practical situations, the responses to a diagnostic test are often classified into a number of ordered categories; such data are referred to as ratings data. It is typically assumed that the underlying model is based on a continuous probability distribution, and the ROC curve is then constructed from such data using this probability model. Properties of the ROC curve are inherited from the model, so understanding the role of different probability distributions in ROC modeling is an interesting and important area of research. In this paper the Lomax distribution is considered as a model for ratings data and the corresponding ROC curve is derived. The maximum likelihood estimation procedure for the related parameters is discussed. This procedure is then illustrated in the analysis of a neurological data example.
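Under the simplifying assumption of a common unit scale for the two groups, the Lomax ROC curve has a closed form that can be checked by simulation (an illustrative sketch, not the paper's ratings-data derivation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Lomax survival with unit scale: S(x; a) = (1 + x) ** (-a).  With a
# common scale, FPR t = S(c; a0) gives the cut-off c = t**(-1/a0) - 1,
# and TPR = S(c; a1) = t ** (a1 / a0), so
#   ROC(t) = t ** (a1 / a0)  and  AUC = a0 / (a0 + a1).
a0, a1 = 3.0, 1.5        # non-diseased / diseased shape parameters
auc_theory = a0 / (a0 + a1)

# Monte-Carlo check: np.random pareto draws are unit-scale Lomax samples.
neg = rng.pareto(a0, 2000)
pos = rng.pareto(a1, 2000)
auc_mc = (pos[:, None] > neg[None, :]).mean()
```

A smaller diseased shape parameter means a heavier upper tail, a concave ROC curve above the diagonal, and an AUC above one half.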
196.
Seoweon Jin, Indika Mallawaarachchi 《Journal of Statistical Computation and Simulation》2013,83(10):1964-1980
Given a collection of n curves that are independent realizations of a functional variable, we are interested in finding patterns in the curve data by exploring low-dimensional approximations to the curves. It is assumed that the data curves are noisy samples from the vector space span{f_1, …, f_m}, where f_1, …, f_m are unknown functions on the real interval (0, T) with square-integrable derivatives of all orders m or less, and m < n. Ramsay [Principal differential analysis: Data reduction by differential operators, J. R. Statist. Soc. Ser. B 58 (1996), pp. 495–508] first proposed the method of regularized principal differential analysis (PDA) as an alternative to principal component analysis for finding low-dimensional approximations to curves. PDA is based on the following theorem: there exists an annihilating linear differential operator (LDO) L of order m such that Lf_i = 0, i = 1, …, m [E.A. Coddington and N. Levinson, Theory of Ordinary Differential Equations, McGraw-Hill, New York, 1955, Theorem 6.2]. PDA specifies m, then uses the data to estimate an annihilating LDO. Smooth estimates of the coefficients of the LDO are obtained by minimizing a penalized sum of the squared norm of the residuals; in this context, the residual is that part of the data curve that is not annihilated by the LDO. PDA obtains the smooth low-dimensional approximation to the data curves by projecting onto the null space of the estimated annihilating LDO; PDA is thus useful for obtaining low-dimensional approximations to the data curves whether or not the interpretation of the annihilating LDO is intuitive or obvious from the context of the data. This paper extends PDA to allow the coefficients of the LDO to depend smoothly on a single continuous covariate. The estimating equations for the coefficients allowing for a continuous covariate are derived; the penalty of Eilers and Marx [Flexible smoothing with B-splines and penalties, Statist. Sci. 11(2) (1996), pp. 89–121] is used to impose smoothness. The results of a small computer simulation study investigating the bias and variance properties of the estimator are reported.
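A toy version of the PDA idea can be sketched by recovering the annihilating LDO of curves spanned by sin t and cos t via least squares on finite differences (illustrative only: constant coefficients, no roughness penalty, no covariate):

```python
import numpy as np

rng = np.random.default_rng(3)

# Curves in span{sin t, cos t} are annihilated by the second-order LDO
#   L f = f'' + w1 f' + w0 f   with w0 = 1, w1 = 0.
# Recover (w0, w1) by regressing -f'' on (f, f') over several curves.
dt = 0.01
t = np.arange(0.0, 2 * np.pi, dt)

F, Fp, Fpp = [], [], []
for _ in range(5):                       # five random curves from the span
    a, b = rng.normal(size=2)
    f = a * np.sin(t) + b * np.cos(t)
    F.append(f[1:-1])
    Fp.append((f[2:] - f[:-2]) / (2 * dt))               # central 1st diff
    Fpp.append((f[2:] - 2 * f[1:-1] + f[:-2]) / dt ** 2)  # central 2nd diff

X = np.column_stack([np.concatenate(F), np.concatenate(Fp)])
y = -np.concatenate(Fpp)
(w0_hat, w1_hat), *_ = np.linalg.lstsq(X, y, rcond=None)
```

Projecting data onto the null space of the estimated operator (here, span{sin t, cos t}) then gives the low-dimensional approximation PDA is after.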
197.
Howard Wainer 《The American statistician》2013,67(1):87-91
A mathematics question that was asked on the Preliminary Scholastic Aptitude Test was scored incorrectly. This was subsequently discovered and became the subject of national attention. In this article we examine the data generated by this item from almost 830,000 examinees and find that detailed statistical analysis, even with this enormous sample size, would not have yielded clues to the blunder. The epistemological and practical consequences of this are also discussed.
198.
《Journal of Statistical Computation and Simulation》2012,82(3-4):159-170
Let F(x) and F(x+θ) be log dose-response curves for a standard preparation and a test preparation, respectively, in a parallel quantal bioassay designed to test the relative potency of a drug, toxicant, or some other substance, and suppose the form of F is unknown. Several estimators of the shift parameter θ, or relative potency, are compared, including some generalized and trimmed Spearman-Kärber estimators and a nonparametric maximum likelihood estimator. Both point and interval estimation are discussed. Some recommendations concerning the choice of estimator are offered.
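A minimal sketch of the (untrimmed) Spearman-Kärber estimator, assuming monotone response proportions that run from 0 to 1; parallel curves then differ in their estimates by exactly the shift θ:

```python
import numpy as np

def spearman_karber(log_doses, props):
    # Spearman-Karber estimate of the mean log tolerance, assuming
    # monotone response proportions with props[0] == 0 and props[-1] == 1.
    # (The trimmed and generalized variants compared in the paper are
    # not implemented here.)
    x = np.asarray(log_doses, dtype=float)
    p = np.asarray(props, dtype=float)
    mids = (x[:-1] + x[1:]) / 2
    return float(np.sum(np.diff(p) * mids))

# A test preparation whose curve is the standard's shifted by theta = 1
# on the log-dose axis yields an estimate shifted by exactly theta.
mu_std = spearman_karber([0, 1, 2, 3, 4], [0.0, 0.2, 0.5, 0.8, 1.0])
mu_test = spearman_karber([1, 2, 3, 4, 5], [0.0, 0.2, 0.5, 0.8, 1.0])
theta_hat = mu_test - mu_std    # estimated log relative potency
```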
199.
《Journal of the Korean Statistical Society》2014,43(2):161-175
The area under the ROC curve (AUC) can be interpreted as the probability that the classification score of a diseased subject is larger than that of a non-diseased subject for a randomly sampled pair of subjects. From the perspective of classification, we want to find a way to separate two groups as distinctly as possible via the AUC. When the difference of the scores of a marker is small, its impact on classification is less important. Thus, a new diagnostic/classification measure based on a modified area under the ROC curve (mAUC) is proposed, defined as a weighted sum of two AUCs, where the AUC with the smaller difference is assigned a lower weight, and vice versa. Using the mAUC is robust in the sense that the mAUC gets larger as the AUC gets larger, as long as they are not equal. Moreover, in many diagnostic situations, only a specific range of specificity is of interest. Under normal distributions, we show that if the AUCs of two markers are within similar ranges, the larger mAUC implies the larger partial AUC for a given specificity. This property of the mAUC helps to identify the marker with the higher partial AUC, even when the AUCs are similar. Two nonparametric estimates of the mAUC and their variances are given. We also suggest the use of the mAUC as the objective function for classification, and the use of the gradient Lasso algorithm for classifier construction and marker selection. Applications to simulated datasets and real microarray gene expression datasets show that our method finds a linear classifier with a higher ROC curve than some other existing linear classifiers, especially in the range of low false positive rates.
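The nonparametric AUC estimate is the Mann-Whitney statistic; below is a hedged sketch, with a purely hypothetical weight standing in for the paper's mAUC weighting:

```python
import numpy as np

def auc_mw(neg, pos):
    # Nonparametric (Mann-Whitney) AUC: P(pos > neg) + 0.5 * P(pos == neg).
    neg, pos = np.asarray(neg), np.asarray(pos)
    gt = (pos[:, None] > neg[None, :]).mean()
    eq = (pos[:, None] == neg[None, :]).mean()
    return gt + 0.5 * eq

# Illustrative mAUC: a weighted sum of two AUCs, with the lower weight on
# the smaller one.  The paper's actual weights depend on the score
# differences; the 0.3/0.7 split here is hypothetical.
auc1 = auc_mw([1, 2, 3], [2, 4, 5])
auc2 = auc_mw([1, 2, 3], [4, 5, 6])
w = 0.3 if auc1 < auc2 else 0.7
mauc = w * auc1 + (1 - w) * auc2
```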
200.
Under Loewe additivity, constant relative potency between two drugs is a sufficient condition for the two drugs to be additive. Implicit in this condition is that one drug acts like a dilution of the other. Geometrically, it means that the dose‐response curve of one drug is a copy of the other, shifted horizontally by a constant along the log‐dose axis. This phenomenon is often referred to as parallelism, so testing drug additivity is equivalent to demonstrating parallelism between two dose‐response curves. Current methods for testing parallelism are usually based on significance tests for differences between parameters in the dose‐response curves of the monotherapies, with a p‐value of less than 0.05 indicative of non‐parallelism. The p‐value‐based methods, however, may be fundamentally flawed, because an increase in either sample size or the precision of the assay used to measure drug effect may result in more frequent rejection of parallel lines for a trivial difference. Moreover, similarity (difference) between model parameters does not necessarily translate into similarity (difference) between the two response curves. As a result, a test may conclude that the model parameters are similar (different), yet give little assurance of the similarity between the two dose‐response curves. In this paper, we introduce a Bayesian approach to directly test the hypothesis that the two drugs have a constant relative potency. An important utility of our proposed method is in aiding go/no‐go decisions concerning two-drug combination studies. It is illustrated with both a simulated example and a real‐life example. Copyright © 2015 John Wiley & Sons, Ltd.
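A toy numerical illustration of parallelism (not the paper's Bayesian test), using logistic curves to show that a horizontal translate has the same shift at every response level:

```python
import numpy as np

def inv_logistic(p, m):
    # Dose at which the logistic curve 1 / (1 + exp(-(x - m))) reaches p.
    return m + np.log(p / (1 - p))

# Two parallel log-dose-response curves: the test curve is the standard
# curve shifted by theta = 1.5 on the log-dose axis.
m_std, m_test = 2.0, 3.5
levels = np.array([0.1, 0.25, 0.5, 0.75, 0.9])
shifts = inv_logistic(levels, m_test) - inv_logistic(levels, m_std)
# Every entry of `shifts` is (approximately) 1.5: the horizontal distance
# is constant, which is exactly the parallelism being tested for.
```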