91.
There has been increasing use of quality-of-life (QoL) instruments in drug development. Missing item values often occur in QoL data. A common approach to this problem is to impute the missing values before scoring. Several imputation procedures, such as imputing with the most correlated item and imputing with a row/column model or an item response model, have been proposed. We examine these procedures using data from two clinical trials, in which the original asthma quality-of-life questionnaire (AQLQ) and the miniAQLQ were used. We propose two modifications to existing procedures: truncating the imputed values to eliminate outliers, and using the proportional odds model as the item response model for imputation. We also propose a novel imputation method based on semi-parametric beta regression, so that the imputed value always lies in the correct range, and we illustrate how this approach can easily be implemented in commonly used statistical software. To compare these approaches, we deleted 5% of item values in the data according to three different missingness mechanisms, imputed them using each approach and compared the imputed values with the true values. Our comparison showed that the row/column-model-based imputation with truncation generally performed better, whereas our new approach had better performance under a number of scenarios.
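As a point of reference for the simplest of these procedures, the sketch below imputes each missing item from its most correlated item and truncates the result to the valid response range. The data frame, column names and 1–7 response scale are illustrative assumptions, not the AQLQ trial data used in the paper.

```python
import numpy as np
import pandas as pd

def impute_most_correlated(items: pd.DataFrame, lo: float = 1, hi: float = 7) -> pd.DataFrame:
    """Impute each missing item from its most correlated item, truncating to [lo, hi]."""
    out = items.copy()
    corr = items.corr()                                   # pairwise correlations from observed values
    for col in items.columns:
        donor = corr[col].drop(col).abs().idxmax()        # donor item = most correlated with `col`
        obs = items[[col, donor]].dropna()
        slope, intercept = np.polyfit(obs[donor], obs[col], 1)   # simple linear fit col ~ donor
        missing = out[col].isna() & items[donor].notna()
        imputed = intercept + slope * items.loc[missing, donor]
        out.loc[missing, col] = imputed.clip(lo, hi)      # truncate imputed values to the item range
    return out

# toy example on a hypothetical 1-7 response scale
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(1, 8, size=(50, 4)), columns=["q1", "q2", "q3", "q4"]).astype(float)
df.loc[df.sample(frac=0.05, random_state=1).index, "q2"] = np.nan
completed = impute_most_correlated(df)
```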
92.
This paper is mainly concerned with minimax estimation in the general linear regression model y = Xβ + ε under ellipsoidal restrictions on the parameter space and a quadratic loss function. We confine ourselves to estimators that are linear in the response vector y. The minimax estimators of the regression coefficient β are derived under the homogeneous and the heterogeneous conditions, respectively. Furthermore, the estimators obtained are ridge-type estimators and are mean dispersion error (MDE) superior to the best linear unbiased estimator b = (X′W⁻¹X)⁻¹X′W⁻¹y under some conditions.
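For concreteness, the following sketch computes the best linear unbiased estimator quoted above and a generic ridge-type shrinkage estimator of the general form such minimax estimators take. The shrinkage constant k is a placeholder, not the paper's minimax solution, and the error covariance W is simulated.

```python
import numpy as np

def blue(X, y, W):
    """Best linear unbiased estimator b = (X'W^{-1}X)^{-1} X'W^{-1} y."""
    Wi = np.linalg.inv(W)
    return np.linalg.solve(X.T @ Wi @ X, X.T @ Wi @ y)

def ridge_type(X, y, W, k):
    """Generic ridge-type shrinkage estimator (X'W^{-1}X + kI)^{-1} X'W^{-1} y;
    the minimax choice of the shrinkage term depends on the restricting ellipsoid
    and is not derived here."""
    Wi = np.linalg.inv(W)
    return np.linalg.solve(X.T @ Wi @ X + k * np.eye(X.shape[1]), X.T @ Wi @ y)

# toy data: y = X beta + eps with equicorrelated errors (hypothetical)
rng = np.random.default_rng(1)
n, p = 40, 3
X = rng.normal(size=(n, p))
W = np.eye(n) + 0.3 * np.ones((n, n))              # positive definite error covariance
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + rng.multivariate_normal(np.zeros(n), W)
print(blue(X, y, W), ridge_type(X, y, W, k=1.0))
```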
93.
The estimation of a data transformation is very useful for yielding response variables that closely satisfy a normal linear model. Generalized linear models enable the fitting of models to a wide range of data types. These models are based on exponential dispersion models. We propose a new class of transformed generalized linear models that extends the Box–Cox models and the generalized linear models. We use the generalized linear model framework to fit these models and discuss maximum likelihood estimation and inference. We give a simple formula to estimate the parameter that indexes the transformation of the response variable for a subclass of models. We also give a simple formula to estimate the rth moment of the original dependent variable. We explore the possibility of applying these models to time series data, extending the generalized autoregressive moving average models discussed by Benjamin et al. [Generalized autoregressive moving average models. J. Amer. Statist. Assoc. 98, 214–223]. The usefulness of these models is illustrated in a simulation study and in applications to three real data sets.
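Since the proposed class contains the Box–Cox transformation as a special case, the sketch below shows the familiar profile-likelihood estimation of the transformation parameter with scipy.stats.boxcox on hypothetical positive responses; it illustrates only this special case, not the full transformed-GLM machinery.

```python
import numpy as np
from scipy import stats

# hypothetical positive, right-skewed responses
rng = np.random.default_rng(2)
y = rng.lognormal(mean=1.0, sigma=0.6, size=200)

# profile-likelihood estimate of the transformation parameter lambda
y_transformed, lam = stats.boxcox(y)
print(f"estimated Box-Cox lambda: {lam:.3f}")

# the transformation itself, for reference
def box_cox(y, lam):
    return np.log(y) if abs(lam) < 1e-12 else (y**lam - 1.0) / lam
```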
94.
A fully parametric first-order autoregressive (AR(1)) model is proposed to analyse binary longitudinal data. By using a discretized version of a copula, the modelling approach allows one to construct separate models for the marginal response and for the dependence between adjacent responses. In particular, the transition model considered here discretizes the Gaussian copula in such a way that the marginal is a Bernoulli distribution. A probit link is used to take concomitant information into account in the behaviour of the underlying marginal distribution. Fixed and time-varying covariates can be included in the model. The method is simple and is a natural extension of the AR(1) model for Gaussian series. Since the approach put forward is likelihood-based, it allows interpretations and inferences that are not possible with semi-parametric approaches such as those based on generalized estimating equations. Data from a study designed to reduce the exposure of children to the sun are used to illustrate the methods.
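A minimal way to see the construction is to threshold a latent Gaussian AR(1) process at probit-determined cut-offs, which yields Bernoulli marginals with Gaussian-copula dependence between adjacent responses. The sketch below simulates such a series under assumed covariates and parameter values; it does not implement the paper's likelihood fitting.

```python
import numpy as np
from scipy.stats import norm

def simulate_binary_ar1(x: np.ndarray, beta: np.ndarray, rho: float, rng=None) -> np.ndarray:
    """Binary series from a latent Gaussian AR(1): y_t = 1{z_t <= Phi^{-1}(p_t)},
    where p_t = Phi(x_t' beta) is the probit marginal and rho is the latent autocorrelation."""
    rng = np.random.default_rng() if rng is None else rng
    n = x.shape[0]
    z = np.empty(n)
    z[0] = rng.normal()
    for t in range(1, n):                       # standard Gaussian AR(1) latent process
        z[t] = rho * z[t - 1] + np.sqrt(1 - rho**2) * rng.normal()
    p = norm.cdf(x @ beta)                      # marginal success probabilities (probit link)
    return (z <= norm.ppf(p)).astype(int)       # Bernoulli(p_t) marginally, dependent across t

# toy use with an intercept and one time-varying covariate (hypothetical)
n = 200
x = np.column_stack([np.ones(n), np.linspace(-1, 1, n)])
y = simulate_binary_ar1(x, beta=np.array([0.2, 0.8]), rho=0.6, rng=np.random.default_rng(3))
```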
95.
An approach to the analysis of time-dependent ordinal quality score data from robust design experiments is developed and applied to an experiment from commercial horticultural research, using concepts of product robustness and longevity that are familiar to analysts in engineering research. A two-stage analysis is used to develop models describing the effects of a number of experimental treatments on the rate of post-sales product quality decline. The first stage uses a polynomial function on a transformed scale to approximate the quality decline of an individual experimental unit, yielding derived coefficients; the second stage uses a joint mean and dispersion model to investigate the effects of the experimental treatments on these derived coefficients. The approach, developed specifically for an application in horticulture, is exemplified with data from a trial testing ornamental plants that are subjected to a range of treatments during production and home-life. The results of the analysis show how a number of control and noise factors affect the rate of post-production quality decline. Although the model is used to analyse quality data from a trial on ornamental plants, the approach developed is expected to be more generally applicable to a wide range of other complex production systems.
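A stripped-down version of the two-stage idea is sketched below on simulated data: stage 1 fits a per-unit polynomial to transformed quality scores and extracts a decline coefficient, and stage 2 regresses that coefficient on the treatment indicator. The logit-type transform, the column names and the omission of the dispersion model are simplifying assumptions, not the paper's specification.

```python
import numpy as np
import pandas as pd

# hypothetical long-format data: one row per (unit, time) with an ordinal quality score 1-9
rng = np.random.default_rng(4)
units = np.repeat(np.arange(30), 8)
time = np.tile(np.arange(8), 30)
treatment = np.repeat(rng.integers(0, 2, 30), 8)             # one control factor, two levels
score = np.clip(9 - 0.4 * time - 0.3 * treatment * time + rng.normal(0, 0.5, len(time)), 1, 9)
df = pd.DataFrame({"unit": units, "time": time, "treatment": treatment, "score": score})

# stage 1: per-unit quadratic fit on a transformed scale; keep the linear decline coefficient
def decline_coefs(g: pd.DataFrame) -> pd.Series:
    z = np.log(g["score"] / (10 - g["score"]))               # an assumed logit-type transform
    c2, c1, c0 = np.polyfit(g["time"], z, 2)
    return pd.Series({"slope": c1, "treatment": g["treatment"].iloc[0]})

stage1 = df.groupby("unit").apply(decline_coefs)

# stage 2: mean model for the derived slope as a function of treatment (dispersion model omitted)
X = np.column_stack([np.ones(len(stage1)), stage1["treatment"]])
coef, *_ = np.linalg.lstsq(X, stage1["slope"], rcond=None)
print("treatment effect on decline rate:", coef[1])
```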
96.
Donor imputation is frequently used in surveys. However, very few variance estimation methods that take into account donor imputation have been developed in the literature. This is particularly true for surveys with high sampling fractions using nearest donor imputation, often called nearest‐neighbour imputation. In this paper, the authors develop a variance estimator for donor imputation based on the assumption that the imputed estimator of a domain total is approximately unbiased under an imputation model; that is, a model for the variable requiring imputation. Their variance estimator is valid, irrespective of the magnitude of the sampling fractions and the complexity of the donor imputation method, provided that the imputation model mean and variance are accurately estimated. They evaluate its performance in a simulation study and show that nonparametric estimation of the model mean and variance via smoothing splines brings robustness with respect to imputation model misspecifications. They also apply their variance estimator to real survey data when nearest‐neighbour imputation has been used to fill in the missing values. The Canadian Journal of Statistics 37: 400–416; 2009 © 2009 Statistical Society of Canada
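For orientation, the sketch below performs basic nearest-neighbour donor imputation with a single auxiliary variable and computes the imputed estimator of a total. It illustrates the imputation step only, not the proposed variance estimator or the smoothing-spline estimation of the imputation-model mean and variance; the data and variable names are invented.

```python
import numpy as np

def nearest_neighbour_impute(y: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Fill each missing y with the y of the respondent whose auxiliary x is closest."""
    y = y.astype(float).copy()
    respondents = np.flatnonzero(~np.isnan(y))
    for i in np.flatnonzero(np.isnan(y)):
        donor = respondents[np.argmin(np.abs(x[respondents] - x[i]))]   # nearest donor in x
        y[i] = y[donor]
    return y

# toy example (hypothetical survey variable y, auxiliary x, equal design weights)
rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 100)
y = 2 * x + rng.normal(0, 1, 100)
y[rng.choice(100, 15, replace=False)] = np.nan
y_imp = nearest_neighbour_impute(y, x)
imputed_total = y_imp.sum()            # imputed estimator of the domain total
```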
97.
In analogy with the cumulative residual entropy recently proposed by Wang et al. [2003a. A new and robust information theoretic measure and its application to image alignment. In: Information Processing in Medical Imaging. Lecture Notes in Computer Science, vol. 2732, Springer, Heidelberg, pp. 388–400; 2003b. Cumulative residual entropy, a new measure of information and its application to image alignment. In: Proceedings of the Ninth IEEE International Conference on Computer Vision (ICCV’03), vol. 1, IEEE Computer Society Press, Silver Spring, MD, pp. 548–553], we introduce and study the cumulative entropy, a new measure of information that is an alternative to the classical differential entropy. We show that the cumulative entropy of a random lifetime X can be expressed as the expectation of its mean inactivity time evaluated at X. Hence, our measure is particularly suitable for describing information in problems related to ageing properties in reliability theory based on the past and on the inactivity times. Our results include various bounds on the cumulative entropy, its connection to the proportional reversed hazards model, and the study of its dynamic version, which is shown to be increasing if the mean inactivity time is increasing. The empirical cumulative entropy is finally proposed to estimate the new information measure.
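Assuming the usual plug-in form of the empirical cumulative entropy, a short implementation might look as follows; the exact estimator studied in the paper may differ in details.

```python
import numpy as np

def empirical_cumulative_entropy(sample: np.ndarray) -> float:
    """CE_n = -sum_{j=1}^{n-1} (x_(j+1) - x_(j)) * (j/n) * log(j/n),
    a plug-in estimate of CE(X) = -integral F(x) log F(x) dx."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    j = np.arange(1, n)                    # ranks 1, ..., n-1
    u = j / n                              # empirical distribution function on the sample gaps
    return float(-np.sum(np.diff(x) * u * np.log(u)))

# sanity check on an exponential sample: for X ~ Exp(1), CE(X) = pi^2/6 - 1 ≈ 0.645
rng = np.random.default_rng(6)
print(empirical_cumulative_entropy(rng.exponential(size=100000)))
```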
98.
In a sample of censored survival times, the presence of an immune proportion of individuals who are not subject to death, failure or relapse may be indicated by a relatively high number of individuals with large censored survival times. In this paper the generalized log-gamma model is modified to allow for the possibility that long-term survivors are present in the data. The model attempts to estimate separately the effects of covariates on the surviving fraction, that is, the proportion of the population for which the event never occurs. The logistic function is used for the regression model of the surviving fraction. Inference for the model parameters is considered via maximum likelihood. Some influence methods, such as the local influence and the total local influence of an individual, are derived, analyzed and discussed. Finally, a data set from the medical area is analyzed under the generalized log-gamma mixture model, and a residual analysis is performed in order to select an appropriate model.
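To make the mixture structure concrete, the sketch below writes down the likelihood of a cure-rate mixture model with a logistic model for the surviving fraction, using a Weibull latency distribution as a simplified stand-in for the generalized log-gamma of the paper; the data, parameter values and censoring scheme are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def neg_log_lik(params, t, delta, x):
    """Mixture cure model: S_pop(t) = pi(x) + (1 - pi(x)) * S0(t), with a logistic
    regression for the surviving (cured) fraction pi(x). A Weibull latency S0 is a
    simplified stand-in for the generalized log-gamma distribution of the paper."""
    g0, g1, log_shape, log_scale = params
    pi = 1.0 / (1.0 + np.exp(-(g0 + g1 * x)))               # surviving fraction
    shape, scale = np.exp(log_shape), np.exp(log_scale)
    f0 = weibull_min.pdf(t, shape, scale=scale)
    S0 = weibull_min.sf(t, shape, scale=scale)
    dens = np.clip((1.0 - pi) * f0, 1e-300, None)           # uncured subject failing at t
    surv = np.clip(pi + (1.0 - pi) * S0, 1e-300, None)      # cured, or uncured and event-free
    return -np.sum(delta * np.log(dens) + (1.0 - delta) * np.log(surv))

# hypothetical data: ~30% cured, Weibull failure times, administrative censoring at t = 6
rng = np.random.default_rng(7)
n = 500
x = rng.normal(size=n)
cured = rng.uniform(size=n) < 0.3
t_event = weibull_min.rvs(1.5, scale=2.0, size=n, random_state=8)
t = np.where(cured, 6.0, np.minimum(t_event, 6.0))
delta = (~cured & (t_event < 6.0)).astype(float)

fit = minimize(neg_log_lik, x0=np.zeros(4), args=(t, delta, x), method="Nelder-Mead")
print(fit.x)    # logistic coefficients for the surviving fraction, log shape, log scale
```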
99.
We propose a flexible generalized autoregressive conditional heteroscedasticity (GARCH)-type model for the prediction of volatility in financial time series. The approach relies on the idea of using multivariate B-splines of lagged observations and volatilities. Estimation of such a B-spline basis expansion is constructed within the likelihood framework for non-Gaussian observations. As the dimension of the B-spline basis is large, i.e. there are many parameters, we use regularized and sparse model fitting with a boosting algorithm. Our method is computationally attractive and feasible for large dimensions. We demonstrate its strong predictive potential for financial volatility on simulated and real data, also in comparison with other approaches, and we present some supporting asymptotic arguments.
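As a rough illustration of the basis-expansion idea only, the sketch below builds a cubic B-spline basis in a single lagged observation and fits squared returns by penalized least squares. The paper instead uses multivariate B-splines of several lags and volatilities with likelihood-based boosting, which this sketch does not attempt; the return series and penalty are invented.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(x: np.ndarray, knots: np.ndarray, degree: int = 3) -> np.ndarray:
    """Evaluate a B-spline basis (one column per basis function) at the points x."""
    t = np.r_[[knots[0]] * degree, knots, [knots[-1]] * degree]     # clamped knot vector
    n_basis = len(t) - degree - 1
    return np.column_stack(
        [BSpline(t, np.eye(n_basis)[i], degree)(x) for i in range(n_basis)]
    )

# hypothetical return series; model sigma_t^2 as a smooth function of the lagged return
rng = np.random.default_rng(9)
r = rng.standard_t(df=6, size=1000) * 0.01
y = r[1:] ** 2                                  # squared returns proxy the conditional variance
x = r[:-1]                                      # single lagged observation (the paper uses several)
B = bspline_design(x, np.quantile(x, np.linspace(0, 1, 8)))

lam = 1e-3                                      # ridge penalty as a crude form of regularization
coef = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ y)
sigma2_hat = np.clip(B @ coef, 1e-8, None)      # fitted volatility path, kept positive
```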
100.
Collapsibility means that the same statistical result of interest can be obtained before and after marginalization over some variables. In this paper, we discuss three kinds of collapsibility for directed acyclic graphs (DAGs): estimate collapsibility, conditional independence collapsibility and model collapsibility. Related to collapsibility, we discuss removability of variables from a DAG. We present conditions for these three different kinds of collapsibility and relationships among them. We give algorithms to find a minimum variable set containing a variable subset of interest onto which a statistical result is collapsible.
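A basic primitive behind conditional independence collapsibility is checking whether a conditional independence holds in the DAG. The sketch below implements the standard d-separation test via the ancestral moral graph; it is a generic check on a toy graph, not the paper's algorithm for finding a minimum collapsible variable set.

```python
from collections import deque

def d_separated(parents: dict, X: set, Y: set, Z: set) -> bool:
    """Check X ⊥ Y | Z in a DAG given as {node: set_of_parents}, via the standard
    ancestral-subgraph + moralization construction."""
    # 1. restrict to the ancestral set of X ∪ Y ∪ Z
    anc, stack = set(), list(X | Y | Z)
    while stack:
        v = stack.pop()
        if v not in anc:
            anc.add(v)
            stack.extend(parents[v])
    # 2. moralize: undirected parent-child edges plus edges between co-parents
    adj = {v: set() for v in anc}
    for v in anc:
        ps = parents[v] & anc
        for p in ps:
            adj[v].add(p)
            adj[p].add(v)
        for p in ps:
            for q in ps:
                if p != q:
                    adj[p].add(q)
    # 3. delete Z and test whether X can still reach Y
    seen, queue = set(X), deque(X)
    while queue:
        v = queue.popleft()
        if v in Y:
            return False
        for w in adj[v] - Z - seen:
            seen.add(w)
            queue.append(w)
    return True

# toy DAG a -> b -> c, a -> c: is a ⊥ c | b?  No, because of the direct edge a -> c.
dag = {"a": set(), "b": {"a"}, "c": {"a", "b"}}
print(d_separated(dag, {"a"}, {"c"}, {"b"}))   # False
```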