Full-text access type
Paid full text | 5246 articles |
Free | 643 articles |
Free (domestic) | 2 articles |
Subject classification
Management | 1123 articles |
Ethnology | 6 articles |
Demography | 49 articles |
Collected works | 23 articles |
Theory and methodology | 821 articles |
General | 333 articles |
Sociology | 1633 articles |
Statistics | 1903 articles |
Publication year
2023 | 5 articles |
2022 | 5 articles |
2021 | 96 articles |
2020 | 180 articles |
2019 | 359 articles |
2018 | 237 articles |
2017 | 424 articles |
2016 | 364 articles |
2015 | 358 articles |
2014 | 391 articles |
2013 | 967 articles |
2012 | 440 articles |
2011 | 275 articles |
2010 | 293 articles |
2009 | 187 articles |
2008 | 227 articles |
2007 | 140 articles |
2006 | 145 articles |
2005 | 124 articles |
2004 | 141 articles |
2003 | 99 articles |
2002 | 101 articles |
2001 | 113 articles |
2000 | 88 articles |
1999 | 13 articles |
1998 | 19 articles |
1997 | 12 articles |
1996 | 11 articles |
1995 | 12 articles |
1994 | 5 articles |
1993 | 3 articles |
1992 | 6 articles |
1991 | 3 articles |
1990 | 4 articles |
1989 | 5 articles |
1988 | 4 articles |
1987 | 4 articles |
1986 | 5 articles |
1985 | 2 articles |
1984 | 5 articles |
1983 | 5 articles |
1982 | 1 article |
1980 | 4 articles |
1979 | 2 articles |
1978 | 2 articles |
1976 | 3 articles |
1975 | 2 articles |
Sort order: 5891 results found, search time 15 ms
121.
In some statistical problems a degree of explicit, prior information is available about the value taken by the parameter of interest, θ say, although the information is much less than would be needed to place a prior density on the parameter's distribution. Often the prior information takes the form of a simple bound, ‘θ > θ1’ or ‘θ < θ1’, where θ1 is determined by physical considerations or mathematical theory, such as positivity of a variance. A conventional approach to accommodating the requirement that θ > θ1 is to replace an estimator, θ̂, of θ by the maximum of θ̂ and θ1. However, this technique is generally inadequate. For one thing, it does not respect the strictness of the inequality θ > θ1, which can be critical in interpreting results. For another, it produces an estimator that does not respond in a natural way to perturbations of the data. In this paper we suggest an alternative approach, in which bootstrap aggregation, or bagging, is used to overcome these difficulties. Bagging gives estimators that, when subjected to the constraint θ > θ1, strictly exceed θ1 except in extreme settings in which the empirical evidence strongly contradicts the constraint. Bagging also reduces estimator variability in the important case for which θ̂ is close to θ1, and more generally produces estimators that respect the constraint in a smooth, realistic fashion.
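As a minimal sketch of the bagging idea (using the sample mean as the estimator and a lower bound θ1; the paper's exact procedure may differ): each bootstrap resample is constrained to the bound, and the constrained values are averaged.

```python
import numpy as np

def bagged_constrained_mean(x, theta1, n_boot=2000, seed=0):
    """Bagged version of the constrained estimator max(mean(x), theta1).

    Averaging max(resample mean, theta1) over bootstrap resamples gives
    an estimate that strictly exceeds theta1 unless essentially every
    resample mean falls at or below the bound.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    boot_means = rng.choice(x, size=(n_boot, x.size), replace=True).mean(axis=1)
    return np.maximum(boot_means, theta1).mean()

# Data whose sample mean (0.03) sits close to the bound theta1 = 0:
x = [-0.2, 0.1, 0.3, 0.05, -0.1]
est = bagged_constrained_mean(x, theta1=0.0)
```

Because some resample means fall above the bound and some below, the bagged estimate moves smoothly with the data rather than sitting exactly at θ1.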
122.
Two-stage designs offer substantial advantages for early phase II studies. The interim analysis following the first stage allows the study to be stopped for futility, or more positively, it might lead to early progression to the trials needed for late phase II and phase III. If the study is to continue to its second stage, then there is an opportunity for a revision of the total sample size. Two-stage designs have been implemented widely in oncology studies in which there is a single treatment arm and patient responses are binary. In this paper the case of two-arm comparative studies in which responses are quantitative is considered. This setting is common in therapeutic areas other than oncology. It will be assumed that observations are normally distributed, but that there is some doubt concerning their standard deviation, motivating the need for sample size review. The work reported has been motivated by a study in diabetic neuropathic pain, and the development of the design for that trial is described in detail.
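The kind of sample size review described above can be sketched with the standard two-arm normal-means formula (illustrative numbers; the paper's design rules are more involved):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.025, power=0.9):
    """Per-arm sample size for a two-arm comparison of normal means
    (one-sided level alpha), via the standard formula
    n = 2 * (z_{1-alpha} + z_{power})^2 * sd^2 / delta^2."""
    z = NormalDist().inv_cdf
    return ceil(2 * (z(1 - alpha) + z(power)) ** 2 * sd ** 2 / delta ** 2)

# Plan with an assumed SD of 2.0, then revise when the interim
# analysis suggests the SD is really 2.5:
n_initial = n_per_arm(delta=1.0, sd=2.0)
n_revised = n_per_arm(delta=1.0, sd=2.5)
```

A larger interim SD estimate inflates the required per-arm size by the factor (2.5/2.0)² ≈ 1.56.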
123.
Elevation in C-reactive protein (CRP) is an independent risk factor for cardiovascular disease progression, and levels are reduced by treatment with statins. However, on-treatment CRP, given baseline CRP and treatment, is not normally distributed, and outliers exist even when transformations are applied. Although classical non-parametric tests address some of these issues, they do not enable straightforward inclusion of covariate information. The aim of this study was to produce a model that improves the efficiency and accuracy of analysing CRP data. Estimation of treatment effects and identification of outliers were addressed using controlled trials of rosuvastatin. The robust statistical technique of MM-estimation was used to fit models to data in the presence of outliers and was compared with least-squares estimation. To develop the model, appropriate transformations of the response and baseline variables were selected. The model was used to investigate how on-treatment CRP related to baseline CRP and estimated treatment effects with rosuvastatin. On comparing least-squares and MM-estimation, MM-estimation was superior in that parameter estimates were more efficient and outliers were clearly identified. Relative reductions in CRP were higher at higher baseline CRP levels. There was also evidence of a dose-response relationship between CRP reductions from baseline and rosuvastatin. Several large outliers were identified, although there did not appear to be any relationship between the incidence of outliers and treatment. In conclusion, using robust estimation to model CRP data is superior to least-squares estimation and non-parametric tests in terms of efficiency, outlier identification and the ability to include covariate information.
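The robustness-to-outliers idea can be illustrated with a simple Huber M-estimate of location fitted by iteratively reweighted least squares (a sketch only: the MM-estimation used in the paper adds a high-breakdown initial scale step, and the paper fits full regression models, not a location):

```python
import numpy as np

def huber_location(x, c=1.345, tol=1e-8, max_iter=100):
    """Robust location via iteratively reweighted least squares
    with Huber weights: observations with large standardized
    residuals are downweighted rather than dominating the fit."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)
    scale = np.median(np.abs(x - mu)) / 0.6745  # MAD scale (assumed > 0)
    for _ in range(max_iter):
        r = (x - mu) / scale
        w = np.where(np.abs(r) <= c, 1.0, c / np.maximum(np.abs(r), 1e-12))
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

x = np.array([1.0, 2.0, 3.0, 2.5, 1.5, 100.0])  # one gross outlier
robust = huber_location(x)  # stays near the bulk of the data
naive = x.mean()            # pulled toward the outlier
```

The final weights also flag the outliers directly: any observation whose weight is far below 1 is a candidate outlier, which is the identification property the abstract highlights.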
124.
There has been increasing use of quality-of-life (QoL) instruments in drug development. Missing item values often occur in QoL data. A common approach to this problem is to impute the missing values before scoring. Several imputation procedures, such as imputing with the most correlated item and imputing with a row/column model or an item response model, have been proposed. We examine these procedures using data from two clinical trials, in which the original asthma quality-of-life questionnaire (AQLQ) and the miniAQLQ were used. We propose two modifications to existing procedures: truncating the imputed values to eliminate outliers and using the proportional odds model as the item response model for imputation. We also propose a novel imputation method based on a semi-parametric beta regression so that the imputed value is always in the correct range, and illustrate how this approach can easily be implemented in commonly used statistical software. To compare these approaches, we deleted 5% of item values in the data according to three different missingness mechanisms, imputed them using these approaches and compared the imputed values with the true values. Our comparison showed that the row/column-model-based imputation with truncation generally performed better, whereas our new approach had better performance under a number of scenarios.
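A minimal sketch of most-correlated-item imputation with truncation (an illustrative regression-based fill, not the paper's exact procedure; the 1–7 range reflects the AQLQ's 7-point item scale):

```python
import numpy as np

def impute_item(data, j, lo=1.0, hi=7.0):
    """Fill NaNs in column j of `data` (rows = subjects, cols = items)
    by simple linear regression on the most correlated other item,
    truncating imputed values to the valid item range [lo, hi]."""
    data = data.astype(float).copy()
    obs = ~np.isnan(data[:, j])
    # Pick the other item most correlated with item j on complete pairs.
    best, best_r = None, -1.0
    for k in range(data.shape[1]):
        if k == j:
            continue
        mask = obs & ~np.isnan(data[:, k])
        r = abs(np.corrcoef(data[mask, j], data[mask, k])[0, 1])
        if r > best_r:
            best, best_r = k, r
    # Least-squares fit of item j on the chosen item.
    mask = obs & ~np.isnan(data[:, best])
    slope, intercept = np.polyfit(data[mask, best], data[mask, j], 1)
    miss = np.isnan(data[:, j]) & ~np.isnan(data[:, best])
    data[miss, j] = np.clip(slope * data[miss, best] + intercept, lo, hi)
    return data

responses = np.array([
    [1, 1, 2],
    [2, 2, 1],
    [3, 3, 4],
    [4, 4, 3],
    [5, np.nan, 6],   # subject with a missing item 1
    [6, 6, 5],
])
filled = impute_item(responses, j=1)
```

The `np.clip` step is the truncation modification: without it, a regression fill can produce values outside the instrument's scoring range.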
125.
Most studies of quality improvement deal with ordered categorical data from industrial experiments. Accounting for the ordering of such data plays an important role in effectively determining the optimal factor-level combination. This paper utilizes correspondence analysis to develop a procedure for improving the ordered categorical response in a multifactor system based on Taguchi's statistic. Practitioners may find the proposed procedure attractive because it relies on a simple and popular statistical tool for graphically identifying the really important factors and determining the levels needed to improve process quality. A case study on optimizing the polysilicon deposition process in a very large-scale integrated circuit is provided to demonstrate the effectiveness of the proposed procedure.
126.
In quantitative trait linkage studies using experimental crosses, the conventional normal location-shift model or other parameterizations may be unnecessarily restrictive. We generalize the mapping problem to a genuine nonparametric setup and provide a robust estimation procedure for the situation where the underlying phenotype distributions are completely unspecified. Classical Wilcoxon–Mann–Whitney statistics are employed for point and interval estimation of QTL positions and effects.
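The point estimate of a shift effect associated with the Wilcoxon–Mann–Whitney statistic is the Hodges–Lehmann estimator, the median of all pairwise differences between the two samples. A minimal sketch:

```python
import numpy as np

def hodges_lehmann_shift(x, y):
    """Hodges-Lehmann estimate of the shift between two samples:
    the median of all pairwise differences y_j - x_i."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.median((y[:, None] - x[None, :]).ravel())

x = [1.1, 2.0, 3.2, 1.8]
y = [3.0, 4.1, 5.0, 3.9]
shift = hodges_lehmann_shift(x, y)
```

Because it uses only the ordering of pairwise differences, the estimate needs no assumption about the shape of the phenotype distributions, which is the point of the nonparametric setup above.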
127.
The data collection process and the inherent population structure are the main causes of clustered data. The observations in a given cluster are correlated, and the magnitude of such correlation is often measured by the intra-cluster correlation coefficient. The intra-cluster correlation can lead to an inflated size of the standard F test in a linear model. In this paper, we propose a solution to this problem. Unlike previous adjustments, our method does not require estimation of the intra-cluster correlation, which is problematic especially when the number of clusters is small. Our simulation results show that the new method outperforms the existing methods.
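For context, the inflation that makes the naive F test oversized is quantified by the standard design-effect formula (this is a textbook result motivating the problem, not the paper's correction, which avoids estimating the intra-cluster correlation altogether):

```python
def design_effect(m, rho):
    """Variance inflation for clustered data with cluster size m and
    intra-cluster correlation rho: Deff = 1 + (m - 1) * rho.
    Ignoring this inflation understates standard errors and makes
    the naive F test reject too often."""
    return 1 + (m - 1) * rho

deff = design_effect(m=10, rho=0.05)  # modest rho, tenfold clustering
n_effective = 200 / deff              # effective size of 200 observations
```

Even a small intra-cluster correlation of 0.05 with clusters of size 10 inflates variances by 45%, shrinking 200 observations to an effective sample of about 138.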
128.
Nonparametric density estimation in the presence of measurement error is considered. The usual kernel deconvolution estimator seeks to account for the contamination in the data by employing a modified kernel. In this paper a new approach based on a weighted kernel density estimator is proposed. Theoretical motivation is provided by the existence of a weight vector that perfectly counteracts the bias in density estimation without generating an excessive increase in variance. In practice a data-driven method of weight selection is required. Our strategy is to minimize the discrepancy between a standard kernel estimate from the contaminated data on the one hand, and the convolution of the weighted deconvolution estimate with the measurement error density on the other hand. We consider a direct implementation of this approach, in which the weights are optimized subject to sum and non-negativity constraints, and a regularized version in which the objective function includes a ridge-type penalty. Numerical tests suggest that the weighted kernel estimation can lead to tangible improvements in performance over the usual kernel deconvolution estimator. Furthermore, weighted kernel estimates are free from the problem of negative estimation in the tails that can occur when using modified kernels. The weighted kernel approach generalizes to the case of multivariate deconvolution density estimation in a very straightforward manner.
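The building block of the approach is a weighted kernel density estimate with non-negative weights summing to one; a minimal sketch (the weights here are simply supplied, whereas the method above selects them to offset measurement-error bias):

```python
import numpy as np

def weighted_kde(x, weights, grid, h):
    """Weighted Gaussian kernel density estimate
        f_hat(t) = (1/h) * sum_i w_i * K((t - x_i) / h),
    with w_i >= 0 and sum w_i = 1. Because the Gaussian kernel is
    non-negative and the weights are non-negative, the estimate can
    never go negative in the tails."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    grid = np.asarray(grid, dtype=float)
    u = (grid[:, None] - x[None, :]) / h
    kernel = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return (kernel * w[None, :]).sum(axis=1) / h

grid = np.linspace(-5.0, 7.0, 1201)
dens = weighted_kde([0.0, 1.0, 2.0], [0.2, 0.5, 0.3], grid, h=0.5)
```

Non-negativity of the weights is exactly the constraint that spares the estimator the negative tails of modified-kernel deconvolution, and the estimate still integrates to one.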
129.
130.
Carroll KJ. Pharmaceutical Statistics 2006; 5(4):283-293
In oncology, it may not always be possible to evaluate the efficacy of new medicines in placebo-controlled trials. Furthermore, while some newer, biologically targeted anti-cancer treatments may be expected to deliver therapeutic benefit in terms of better tolerability or improved symptom control, they may not always be expected to provide increased efficacy relative to existing therapies. This naturally leads to the use of active-control, non-inferiority trials to evaluate such treatments. In recent evaluations of anti-cancer treatments, the non-inferiority margin has often been defined in terms of demonstrating that at least 50% of the active control effect has been retained by the new drug, using methods such as those described by Rothmann et al. (Statistics in Medicine 2003; 22:239-264) and Wang and Hung (Controlled Clinical Trials 2003; 24:147-155). However, this approach can lead to prohibitively large clinical trials and results in a tendency to dichotomize trial outcome as either 'success' or 'failure', which oversimplifies interpretation. With relatively modest modification, these methods can be used to define a stepwise approach to design and analysis. In the first design step, the trial is sized to show indirectly that the new drug would have beaten placebo; in the second analysis step, the probability that the new drug is superior to placebo is assessed and, if sufficiently high, in the third and final step the relative efficacy of the new drug to control is assessed on a continuum of effect retention via an 'effect retention likelihood plot'. This stepwise approach is likely to provide a more complete assessment of relative efficacy so that the value of new treatments can be better judged.
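On the log hazard-ratio scale, the effect-retention fraction has a simple point estimate, sketched below with hypothetical numbers (the cited methods also propagate the uncertainty in the historical control-versus-placebo estimate, which this sketch ignores):

```python
from math import log

def retention(hr_new_vs_ctrl, hr_ctrl_vs_placebo):
    """Point estimate of the fraction of the active-control effect
    retained by the new drug, on the log hazard-ratio scale: the
    indirect new-vs-placebo log HR divided by the historical
    control-vs-placebo log HR."""
    log_indirect = log(hr_new_vs_ctrl) + log(hr_ctrl_vs_placebo)
    return log_indirect / log(hr_ctrl_vs_placebo)

# Hypothetical: control beat placebo historically with HR 0.7, and the
# new drug shows HR 1.1 versus control in the non-inferiority trial.
r = retention(hr_new_vs_ctrl=1.1, hr_ctrl_vs_placebo=0.7)
```

Plotting this retention estimate against its likelihood over a continuum, rather than testing it against a single 50% cutoff, is the idea behind the 'effect retention likelihood plot' described above.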