61.
The authors develop consistent nonparametric estimation techniques for the directional mixing density. Classical spherical harmonics are used to adapt Euclidean techniques to this directional setting. Minimax rates of convergence are obtained for rotationally invariant densities satisfying various smoothness conditions. It is found that the differences in smoothness between the Laplace, the Gaussian and the von Mises‐Fisher distributions lead to contrasting inferential conclusions.
62.
Two-stage designs offer substantial advantages for early phase II studies. The interim analysis following the first stage allows the study to be stopped for futility, or more positively, it might lead to early progression to the trials needed for late phase II and phase III. If the study is to continue to its second stage, then there is an opportunity for a revision of the total sample size. Two-stage designs have been implemented widely in oncology studies in which there is a single treatment arm and patient responses are binary. In this paper the case of two-arm comparative studies in which responses are quantitative is considered. This setting is common in therapeutic areas other than oncology. It will be assumed that observations are normally distributed, but that there is some doubt concerning their standard deviation, motivating the need for sample size review. The work reported has been motivated by a study in diabetic neuropathic pain, and the development of the design for that trial is described in detail.
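A minimal sketch of the sample size review this abstract describes, using the standard normal-theory formula for a two-arm comparison of means; the values of sigma, delta, alpha and power are illustrative and are not those of the diabetic neuropathic pain trial:

```python
from math import ceil
from statistics import NormalDist

def per_group_n(sigma, delta, alpha=0.05, power=0.9):
    """Per-group sample size for a two-arm comparison of normal means:
    n = 2 * (z_{1-alpha/2} + z_{power})**2 * sigma**2 / delta**2."""
    z = NormalDist().inv_cdf
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 * sigma ** 2 / delta ** 2)

# Planning stage: assume SD = 10 for a clinically relevant difference of 5.
n_planned = per_group_n(sigma=10, delta=5)   # 85 per group

# Interim review: the pooled SD estimated from first-stage data comes out
# higher than assumed, so the total sample size is revised upward.
n_revised = per_group_n(sigma=12, delta=5)   # 122 per group
```

The revision step simply re-applies the planning formula with the interim variance estimate; the paper's actual design also controls type I error across the two stages, which this sketch does not address.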
63.
Elevation in C-reactive protein (CRP) is an independent risk factor for cardiovascular disease progression, and levels are reduced by treatment with statins. However, on-treatment CRP, given baseline CRP and treatment, is not normally distributed, and outliers persist even when transformations are applied. Although classical non-parametric tests address some of these issues, they do not enable straightforward inclusion of covariate information. The aim of this study was to produce a model that improves the efficiency and accuracy of analyses of CRP data. Estimation of treatment effects and identification of outliers were addressed using controlled trials of rosuvastatin. The robust statistical technique of MM-estimation was used to fit models to data in the presence of outliers and was compared with least-squares estimation. To develop the model, appropriate transformations of the response and baseline variables were selected. The model was used to investigate how on-treatment CRP related to baseline CRP and estimated treatment effects with rosuvastatin. MM-estimation proved superior to least-squares estimation: parameter estimates were more efficient and outliers were clearly identified. Relative reductions in CRP were higher at higher baseline CRP levels. There was also evidence of a dose-response relationship between CRP reductions from baseline and rosuvastatin. Several large outliers were identified, although there did not appear to be any relationship between the incidence of outliers and treatments. In conclusion, using robust estimation to model CRP data is superior to least-squares estimation and non-parametric tests in terms of efficiency, outlier identification and the ability to include covariate information.
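To illustrate why robust estimation resists outliers where least squares does not, here is a sketch of Huber-type M-estimation via iteratively reweighted least squares. This is a simpler relative of the MM-estimator used in the paper (MM-estimation adds a high-breakdown initial S-estimate and a redescending loss); the data are synthetic:

```python
import numpy as np

def huber_irls(x, y, k=1.345, n_iter=50):
    """Huber-type M-estimation via iteratively reweighted least squares:
    residuals beyond k robust-scale units get weight k/|u| instead of 1."""
    X = np.column_stack([np.ones(len(y)), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]           # least-squares start
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # MAD robust scale
        u = r / (s if s > 0 else 1.0)
        w = np.where(np.abs(u) <= k, 1.0, k / np.abs(u))  # Huber weights
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 40)
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.2, 40)
y[5] += 15.0                                              # one gross outlier

ols = np.linalg.lstsq(np.column_stack([np.ones(40), x]), y, rcond=None)[0]
rob = huber_irls(x, y)
```

The single outlier drags the least-squares intercept well away from the truth, while the robust fit downweights it to near-zero influence.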
64.
There has been increasing use of quality-of-life (QoL) instruments in drug development. Missing item values often occur in QoL data. A common approach to this problem is to impute the missing values before scoring. Several imputation procedures, such as imputing with the most correlated item and imputing with a row/column model or an item response model, have been proposed. We examine these procedures using data from two clinical trials, in which the original asthma quality-of-life questionnaire (AQLQ) and the miniAQLQ were used. We propose two modifications to existing procedures: truncating the imputed values to eliminate outliers and using the proportional odds model as the item response model for imputation. We also propose a novel imputation method based on a semi-parametric beta regression, so that the imputed value is always in the correct range, and illustrate how this approach can easily be implemented in commonly used statistical software. To compare these approaches, we deleted 5% of item values in the data according to three different missingness mechanisms, imputed them using these approaches and compared the imputed values with the true values. Our comparison showed that the row/column-model-based imputation with truncation generally performed better, whereas our new approach had better performance under a number of scenarios.
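A toy sketch of the "impute with the most correlated item, then truncate" idea on a hypothetical 7-point item scale; the regression-based fill-in and the synthetic items are illustrative, not the paper's procedures:

```python
import numpy as np

def impute_truncated(donor, target, lo=1.0, hi=7.0):
    """Impute missing values in `target` by regressing on its most
    correlated item `donor`, then truncate imputed values to the valid
    item range [lo, hi] (the truncation modification described above)."""
    obs = ~np.isnan(target)
    slope, intercept = np.polyfit(donor[obs], target[obs], 1)
    filled = np.where(obs, target, intercept + slope * donor)
    return np.clip(filled, lo, hi)

# Hypothetical 7-point items; item_b is missing for three respondents.
rng = np.random.default_rng(1)
item_a = rng.integers(1, 8, 50).astype(float)
item_b = np.clip(np.round(item_a + rng.normal(0, 1, 50)), 1, 7)
item_b[[3, 10, 20]] = np.nan
completed = impute_truncated(item_a, item_b)
```

Truncation guarantees the imputed values stay inside the instrument's range, which is also the motivation the abstract gives for the beta-regression approach.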
65.
The author considers estimation under a Gamma process model for degradation data. The setting for degradation data is one in which n independent units, each with a Gamma process with a common shape function and scale parameter, are observed at several possibly different times. Covariates can be incorporated into the model by taking the scale parameter as a function of the covariates. The author proposes using the maximum pseudo‐likelihood method to estimate the unknown parameters. The method requires usage of the Pool Adjacent Violators Algorithm. Asymptotic properties, including consistency, convergence rate and asymptotic distribution, are established. Simulation studies are conducted to validate the method and its application is illustrated by using bridge beams data and carbon‐film resistors data. The Canadian Journal of Statistics 37: 102‐118; 2009 © 2009 Statistical Society of Canada
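A sketch of the Gamma process degradation model with a linear shape function a*t, fitted by simple moment matching. The moment estimates stand in for the paper's maximum pseudo-likelihood with the Pool Adjacent Violators Algorithm, which this sketch does not implement; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
a_true, u_true = 2.0, 0.5            # shape rate and scale (illustrative)
times = np.linspace(0.5, 5.0, 10)    # common inspection times
dt = np.diff(np.concatenate([[0.0], times]))

# 200 independent units, each a Gamma process with shape function a*t:
# increments over disjoint intervals are independent Gamma(a*dt, scale=u).
inc = rng.gamma(shape=a_true * dt, scale=u_true, size=(200, len(dt)))
paths = np.cumsum(inc, axis=1)       # monotone degradation paths

# Moment matching: E[inc] = a*dt*u and Var[inc] = a*dt*u**2, so u = Var/E.
u_hat = float(np.mean(inc.var(axis=0, ddof=1) / inc.mean(axis=0)))
a_hat = float(np.mean(inc.mean(axis=0) / (dt * u_hat)))
```

Monotone non-decreasing sample paths are the reason Gamma processes suit degradation data; the simulation above reproduces that property by construction.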
66.
Donor imputation is frequently used in surveys. However, very few variance estimation methods that take into account donor imputation have been developed in the literature. This is particularly true for surveys with high sampling fractions using nearest donor imputation, often called nearest‐neighbour imputation. In this paper, the authors develop a variance estimator for donor imputation based on the assumption that the imputed estimator of a domain total is approximately unbiased under an imputation model; that is, a model for the variable requiring imputation. Their variance estimator is valid, irrespective of the magnitude of the sampling fractions and the complexity of the donor imputation method, provided that the imputation model mean and variance are accurately estimated. They evaluate its performance in a simulation study and show that nonparametric estimation of the model mean and variance via smoothing splines brings robustness with respect to imputation model misspecifications. They also apply their variance estimator to real survey data when nearest‐neighbour imputation has been used to fill in the missing values. The Canadian Journal of Statistics 37: 400–416; 2009 © 2009 Statistical Society of Canada
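For readers unfamiliar with the imputation step itself, here is a minimal sketch of nearest-donor imputation on a single auxiliary variable (the variance estimator developed in the paper is not reproduced here):

```python
import numpy as np

def nn_impute(aux, y):
    """Nearest-donor (nearest-neighbour) imputation: each nonrespondent
    receives the observed y-value of the respondent closest to it on the
    auxiliary variable `aux`."""
    resp = ~np.isnan(y)
    donor_aux, donor_y = aux[resp], y[resp]
    out = y.copy()
    for i in np.where(~resp)[0]:
        out[i] = donor_y[np.argmin(np.abs(donor_aux - aux[i]))]
    return out

# Tiny illustrative sample: units 2 and 4 are nonrespondents.
aux = np.array([1.0, 2.2, 3.0, 4.4, 5.0])
y = np.array([10.0, np.nan, 30.0, np.nan, 50.0])
filled = nn_impute(aux, y)
```

Because every imputed value is an actually observed value, donor imputation preserves the support of the variable, which is one reason it is popular in surveys.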
67.
In some statistical problems a degree of explicit, prior information is available about the value taken by the parameter of interest, θ say, although the information is much less than would be needed to place a prior density on the parameter's distribution. Often the prior information takes the form of a simple bound, ‘θ > θ1’ or ‘θ < θ1’, where θ1 is determined by physical considerations or mathematical theory, such as positivity of a variance. A conventional approach to accommodating the requirement that θ > θ1 is to replace an estimator, θ̂ say, of θ by the maximum of θ̂ and θ1. However, this technique is generally inadequate. For one thing, it does not respect the strictness of the inequality θ > θ1, which can be critical in interpreting results. For another, it produces an estimator that does not respond in a natural way to perturbations of the data. In this paper we suggest an alternative approach, in which bootstrap aggregation, or bagging, is used to overcome these difficulties. Bagging gives estimators that, when subjected to the constraint θ > θ1, strictly exceed θ1 except in extreme settings in which the empirical evidence strongly contradicts the constraint. Bagging also reduces estimator variability in the important case for which θ̂ is close to θ1, and more generally produces estimators that respect the constraint in a smooth, realistic fashion.
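A minimal sketch of the bagging idea, with the hypothetical choice of θ as a population mean constrained to satisfy θ > θ1 = 0: averaging the hard-thresholded estimator over bootstrap resamples gives a value that is strictly positive unless virtually every resample violates the constraint, and varies smoothly with the data.

```python
import numpy as np

def bagged_constrained_mean(x, theta1=0.0, n_boot=500, seed=0):
    """Bootstrap aggregation of the constrained estimator max(theta_hat, theta1):
    draw resamples, constrain each resample's estimate, then average."""
    rng = np.random.default_rng(seed)
    n = len(x)
    ests = [max(rng.choice(x, size=n, replace=True).mean(), theta1)
            for _ in range(n_boot)]
    return float(np.mean(ests))

# Sample mean hovering near the boundary theta1 = 0 (illustrative data).
x = np.random.default_rng(123).normal(0.0, 1.0, 100)
naive = max(x.mean(), 0.0)           # hard-thresholded estimator
bagged = bagged_constrained_mean(x)  # strictly exceeds the bound
```

The hard-thresholded estimator sits exactly at 0 whenever the sample mean is negative, whereas the bagged version moves continuously as the data are perturbed.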
68.
Abstract. We consider a stochastic process driven by diffusions and jumps. Given a discrete record of observations, we devise a technique for identifying the times when jumps larger than a suitably defined threshold occurred. This allows us to determine a consistent non‐parametric estimator of the integrated volatility when the infinite activity jump component is Lévy. Jump size estimation and central limit results are proved in the case of finite activity jumps. Some simulations illustrate the applicability of the methodology in finite samples and its superiority over multipower variations, especially when it is not possible to use high-frequency data.
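A simplified simulation of threshold-based jump detection, assuming constant volatility and an illustrative threshold of the form c * dt**0.49 (which shrinks more slowly than the Brownian scale sqrt(dt), so diffusion increments fall below it while jump increments exceed it); the constant c = 4*sigma is an arbitrary choice for this sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
n, T = 1000, 1.0
dt = T / n
sigma = 0.3

# Discrete record of increments: Brownian part plus two large jumps.
dX = sigma * np.sqrt(dt) * rng.standard_normal(n)
dX[200] += 1.0
dX[700] -= 0.8

# Flag increments exceeding the threshold as jump times.
threshold = 4 * sigma * dt ** 0.49
jumps = np.abs(dX) > threshold

# Truncated realized volatility: sum of squared sub-threshold increments
# estimates the integrated volatility sigma**2 * T = 0.09.
iv_hat = float(np.sum(dX[~jumps] ** 2))
naive = float(np.sum(dX ** 2))   # jump-contaminated realized variance
```

Discarding the flagged increments removes the jump contribution, so the truncated sum recovers the diffusion variance that the naive realized variance badly overstates.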
69.
Anthropogenic climate change information tends to be interpreted against the backdrop of initial environmental beliefs, which can lead to some people being resistant toward the information. In this article (N = 88), we examined whether self‐affirmation via reflection on personally important values could attenuate the impact of initial beliefs on the acceptance of anthropogenic climate change evidence. Our findings showed that initial beliefs about the human impact on ecological stability influenced the acceptance of information only among nonaffirmed participants. Self‐affirmed participants who were initially resistant toward the information showed stronger beliefs in the existence of climate change risks and greater acknowledgment that individual efficacy has a role to play in reducing climate change risks than did their nonaffirmed counterparts.
70.
In risk assessment, the moment‐independent sensitivity analysis (SA) technique for reducing model uncertainty has attracted a great deal of attention from analysts and practitioners. It aims at measuring the relative importance of an individual input, or a set of inputs, in determining the uncertainty of the model output by looking at the entire distribution range of the model output. In this article, along the lines of Plischke et al., we point out that the original moment‐independent SA index (also called the delta index) can also be interpreted as a dependence measure between model output and input variables, and we introduce another moment‐independent SA index (called the extended delta index) based on copulas. Nonparametric methods for estimating the delta and extended delta indices are then proposed. Both methods need only a single set of samples to compute all the indices; thus, they avoid the "curse of dimensionality." Finally, an analytical test example, a risk assessment model, and the Level E model are employed for comparing the delta and the extended delta indices and testing the two calculation methods. Results show that the delta and the extended delta indices produce the same importance ranking in these three test examples. It is also shown that the two proposed calculation methods dramatically reduce the computational burden.
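A sketch of a given-data estimator of the delta index in the spirit of Plischke et al.: partition the sample on the input, measure half the L1 distance between each conditional output histogram and the unconditional one, and average with partition weights. The partition and bin counts are illustrative tuning choices, and the toy model is not one of the article's test cases:

```python
import numpy as np

def delta_index(x, y, n_part=20, n_bins=30):
    """Histogram-based given-data estimate of the moment-independent
    delta index of input x with respect to output y."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_part + 1))
    y_edges = np.histogram_bin_edges(y, bins=n_bins)
    widths = np.diff(y_edges)
    p_all, _ = np.histogram(y, bins=y_edges, density=True)
    delta, n = 0.0, len(y)
    for k in range(n_part):
        lo, hi = edges[k], edges[k + 1]
        in_k = (x >= lo) & (x < hi) if k < n_part - 1 else (x >= lo)
        if not in_k.any():
            continue
        p_k, _ = np.histogram(y[in_k], bins=y_edges, density=True)
        delta += (in_k.sum() / n) * 0.5 * np.sum(np.abs(p_k - p_all) * widths)
    return float(delta)

# Toy model: the output depends strongly on x1 and weakly on x2.
rng = np.random.default_rng(5)
x1, x2 = rng.standard_normal(5000), rng.standard_normal(5000)
y = x1 + 0.1 * x2
d1, d2 = delta_index(x1, y), delta_index(x2, y)
```

A single sample of (x1, x2, y) suffices to estimate both indices, which is the "given data" property that lets such estimators sidestep the curse of dimensionality.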