8,058 results (search time: 15 ms)
171.
Two-stage designs offer substantial advantages for early phase II studies. The interim analysis following the first stage allows the study to be stopped for futility, or more positively, it might lead to early progression to the trials needed for late phase II and phase III. If the study is to continue to its second stage, then there is an opportunity for a revision of the total sample size. Two-stage designs have been implemented widely in oncology studies in which there is a single treatment arm and patient responses are binary. In this paper the case of two-arm comparative studies in which responses are quantitative is considered. This setting is common in therapeutic areas other than oncology. It will be assumed that observations are normally distributed, but that there is some doubt concerning their standard deviation, motivating the need for sample size review. The work reported has been motivated by a study in diabetic neuropathic pain, and the development of the design for that trial is described in detail.
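A minimal sketch of the sample-size review step for the two-arm normal setting described above, assuming the standard two-sample normal-approximation formula (the function name and the interim pooled-SD figures are illustrative, not the authors' exact procedure):

```python
from math import ceil
from scipy.stats import norm

def per_arm_n(sigma, delta, alpha=0.05, power=0.9):
    """Per-arm sample size for a two-arm comparison of normal means:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

# Planning stage: SD assumed to be 1.0, target difference 0.5.
n_planned = per_arm_n(sigma=1.0, delta=0.5)

# Interim review after stage 1: the pooled SD came out larger than
# assumed, so the total sample size is revised upwards.
n_revised = per_arm_n(sigma=1.2, delta=0.5)
```

The revision simply re-runs the planning formula with the stage-1 SD estimate; the doubt about the standard deviation is exactly what motivates this review.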
172.
Elevation in C-reactive protein (CRP) is an independent risk factor for cardiovascular disease progression, and levels are reduced by treatment with statins. However, on-treatment CRP, given baseline CRP and treatment, is not normally distributed, and outliers exist even when transformations are applied. Although classical non-parametric tests address some of these issues, they do not enable straightforward inclusion of covariate information. The aim of this study was to produce a model that improves the efficiency and accuracy of the analysis of CRP data. Estimation of treatment effects and identification of outliers were addressed using controlled trials of rosuvastatin. The robust statistical technique of MM-estimation was used to fit models to data in the presence of outliers and was compared with least-squares estimation. To develop the model, appropriate transformations of the response and baseline variables were selected. The model was used to investigate how on-treatment CRP related to baseline CRP and estimated treatment effects with rosuvastatin. MM-estimation proved superior to least-squares estimation: parameter estimates were more efficient and outliers were clearly identified. Relative reductions in CRP were higher at higher baseline CRP levels. There was also evidence of a dose-response relationship between CRP reductions from baseline and rosuvastatin. Several large outliers were identified, although there did not appear to be any relationship between the incidence of outliers and treatments. In conclusion, using robust estimation to model CRP data is superior to least-squares estimation and non-parametric tests in terms of efficiency, outlier identification and the ability to include covariate information.
173.
We propose a modification to the variant of link-tracing sampling suggested by Félix-Medina and Thompson [M.H. Félix-Medina, S.K. Thompson, Combining cluster sampling and link-tracing sampling to estimate the size of hidden populations, Journal of Official Statistics 20 (2004) 19–38] that gives the researcher a degree of control over the final sample size, the precision of the estimates, or other characteristics of the sample of interest. We achieve this goal by selecting an initial sequential sample of sites instead of an initial simple random sample of sites as those authors suggested. We estimate the population size by means of the maximum likelihood estimators suggested by the above-mentioned authors or by the Bayesian estimators proposed by Félix-Medina and Monjardin [M.H. Félix-Medina, P.E. Monjardin, Combining link-tracing sampling and cluster sampling to estimate the size of hidden populations: A Bayesian-assisted approach, Survey Methodology 32 (2006) 187–195]. Variances are estimated by means of jackknife and bootstrap estimators as well as by the delta estimators proposed in the two above-mentioned papers. Interval estimates of the population size are obtained by means of Wald and bootstrap confidence intervals. The results of an exploratory simulation study indicate good performance of the proposed sampling strategy.
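The bootstrap interval estimation mentioned above can be sketched generically. This is a plain percentile bootstrap for an arbitrary statistic, not the paper's link-tracing population-size estimator (which requires the full sampling design); the function name is illustrative:

```python
import numpy as np

def percentile_bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for stat(data):
    resample with replacement, recompute the statistic, take quantiles."""
    rng = np.random.default_rng(seed)
    reps = np.array([stat(rng.choice(data, size=len(data), replace=True))
                     for _ in range(n_boot)])
    return tuple(np.quantile(reps, [alpha / 2, 1 - alpha / 2]))
```

In the paper's setting, `stat` would be the maximum likelihood or Bayesian population-size estimator applied to a resampled set of sites.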
174.
There has been increasing use of quality-of-life (QoL) instruments in drug development. Missing item values often occur in QoL data. A common approach to this problem is to impute the missing values before scoring. Several imputation procedures, such as imputing with the most correlated item and imputing with a row/column model or an item response model, have been proposed. We examine these procedures using data from two clinical trials, in which the original asthma quality-of-life questionnaire (AQLQ) and the miniAQLQ were used. We propose two modifications to existing procedures: truncating the imputed values to eliminate outliers and using the proportional odds model as the item response model for imputation. We also propose a novel imputation method based on a semi-parametric beta regression so that the imputed value is always in the correct range and illustrate how this approach can easily be implemented in commonly used statistical software. To compare these approaches, we deleted 5% of item values in the data according to three different missingness mechanisms, imputed them using these approaches and compared the imputed values with the true values. Our comparison showed that the row/column-model-based imputation with truncation generally performed better, whereas our new approach had better performance under a number of scenarios.
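A minimal sketch of the "impute from the most correlated item, then truncate" idea, assuming a 7-point item scale (AQLQ items are 7-point, but the linear fit, function name and data here are illustrative, not the paper's exact procedure):

```python
import numpy as np

def impute_truncated(target, donor, lo=1, hi=7):
    """Impute missing values of `target` from a linear fit on the most
    correlated item `donor`, truncating predictions to the item range
    so that imputed values cannot fall outside [lo, hi]."""
    miss = np.isnan(target)
    b, a = np.polyfit(donor[~miss], target[~miss], 1)  # slope, intercept
    out = target.copy()
    out[miss] = np.clip(b * donor[miss] + a, lo, hi)
    return out
```

The truncation step is the paper's first proposed modification; the beta-regression method goes further by building the range restriction into the model itself rather than clipping after the fact.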
175.
Gabriel Escarela, Luis Carlos Pérez-Ruíz & Russell J. Bowater, Journal of Applied Statistics 2009, 36(6):647–657
A fully parametric first-order autoregressive (AR(1)) model is proposed to analyse binary longitudinal data. By using a discretized version of a copula, the modelling approach allows one to construct separate models for the marginal response and for the dependence between adjacent responses. In particular, the transition model that is focused on discretizes the Gaussian copula in such a way that the marginal is a Bernoulli distribution. A probit link is used to take into account concomitant information in the behaviour of the underlying marginal distribution. Fixed and time-varying covariates can be included in the model. The method is simple and is a natural extension of the AR(1) model for Gaussian series. Since the approach put forward is likelihood-based, it allows interpretations and inferences to be made that are not possible with semi-parametric approaches such as those based on generalized estimating equations. Data from a study designed to reduce the exposure of children to the sun are used to illustrate the methods.
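The discretized-copula construction can be sketched by thresholding a latent Gaussian AR(1) series: the threshold fixes the Bernoulli marginal exactly, while the latent autocorrelation supplies the serial dependence. This is a simulation sketch only; the paper additionally links covariates to the marginal through a probit model:

```python
import numpy as np
from scipy.stats import norm

def simulate_binary_ar1(n, p, rho, seed=0):
    """Binary series with Bernoulli(p) marginals whose serial dependence
    comes from a latent Gaussian AR(1) with lag-one correlation rho,
    i.e. a discretized Gaussian copula."""
    rng = np.random.default_rng(seed)
    z = np.empty(n)
    z[0] = rng.normal()
    for t in range(1, n):
        z[t] = rho * z[t - 1] + np.sqrt(1 - rho ** 2) * rng.normal()
    # Each z[t] is marginally N(0,1), so thresholding at the p-quantile
    # yields exact Bernoulli(p) marginals regardless of rho.
    return (z <= norm.ppf(p)).astype(int)
```

Because the marginal and the dependence are controlled by separate parameters (`p` and `rho`), the two can be modelled separately, which is the point of the copula formulation.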
176.
First, we propose a new method for estimating the conditional variance in heteroscedastic regression models. For heavy-tailed innovations, this method is in general more efficient than either the local linear or the local likelihood estimator. Second, we apply a variance reduction technique to improve the inference for the conditional variance. The proposed methods are investigated through their asymptotic distributions and numerical performances.
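The local linear benchmark against which the new estimator is compared can be sketched as a two-step residual-based procedure: fit the conditional mean by local linear smoothing, then smooth the squared residuals. This is the standard baseline, not the authors' more efficient proposal; bandwidths and the simulated model are illustrative:

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear fit at x0 with a Gaussian kernel of bandwidth h,
    solved as weighted least squares on (1, x - x0)."""
    sw = np.exp(-0.25 * ((x - x0) / h) ** 2)      # sqrt of kernel weight
    X = np.column_stack([np.ones_like(x), x - x0])
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta[0]                                 # intercept = fit at x0

rng = np.random.default_rng(0)
x = rng.uniform(0, 2, 2000)
sigma = 0.2 + 0.3 * x                              # true conditional SD
y = np.sin(x) + sigma * rng.normal(size=2000)

m_hat = np.array([local_linear(xi, x, y, 0.2) for xi in x])
r2 = (y - m_hat) ** 2                              # squared residuals
var_at_1 = local_linear(1.0, x, r2, 0.3)           # estimate of sigma^2(1)
```

With heavy-tailed innovations the squared residuals become extremely variable, which is precisely where the residual-based approach loses efficiency and the paper's method is claimed to help.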
177.
The estimation of a data transformation is very useful to yield response variables satisfying closely a normal linear model. Generalized linear models enable the fitting of models to a wide range of data types. These models are based on exponential dispersion models. We propose a new class of transformed generalized linear models to extend the Box and Cox models and the generalized linear models. We use the generalized linear model framework to fit these models and discuss maximum likelihood estimation and inference. We give a simple formula to estimate the parameter that indexes the transformation of the response variable for a subclass of models. We also give a simple formula to estimate the rth moment of the original dependent variable. We explore the possibility of applying these models to time series data to extend the generalized autoregressive moving average models discussed by Benjamin et al. [Generalized autoregressive moving average models. J. Amer. Statist. Assoc. 98 (2003), 214–223]. The usefulness of these models is illustrated in a simulation study and in applications to three real data sets.
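The Box-Cox case that the transformed GLMs extend can be sketched with SciPy's profile-likelihood estimate of the transformation parameter. This illustrates only the classical Box-Cox step, not the authors' full transformed-GLM framework; the simulated lognormal data are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Lognormal response: the log transform (lambda = 0) should normalize it.
y = np.exp(rng.normal(2.0, 0.5, 500))

# scipy.stats.boxcox returns the transformed data and the
# profile-likelihood estimate of the transformation parameter lambda.
y_trans, lam = stats.boxcox(y)
```

An estimated `lam` near 0 recovers the log transformation, which is the expected answer for lognormal data; the paper's "simple formula" plays the role of this profile-likelihood step within its wider model class.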
178.
Modified inference about the mean of the exponential distribution using moving extreme ranked set sampling
The maximum likelihood estimator (MLE) and the likelihood ratio test (LRT) are considered for making inference about the scale parameter of the exponential distribution under moving extreme ranked set sampling (MERSS). Neither the MLE nor the LRT can be written in closed form. A modification of the MLE using the technique suggested by Mehrotra and Nanda (Biometrika 61:601–606, 1974) is therefore considered, and this modified estimator is used to modify the LRT so as to obtain a closed-form test of a simple hypothesis against one-sided alternatives. The same idea is used to modify the most powerful test (MPT) for a simple hypothesis versus a simple hypothesis, again yielding a closed-form test against one-sided alternatives. The modified estimator turns out to be a good competitor of the MLE, and the modified tests are good competitors of the LRT, under both MERSS and simple random sampling (SRS).
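Why the MERSS likelihood has no closed-form maximizer can be seen directly: each observation is the maximum of a different set size, so the log-likelihood mixes terms that cannot be solved analytically for the scale parameter. The sketch below simulates MERSS from an exponential and maximizes the likelihood numerically; the simulation settings are illustrative, and the closed-form modified estimator of the paper is not reproduced here:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
theta_true, m, cycles = 2.0, 5, 200

# MERSS: from the i-th set of i units, keep only the maximum (i = 1..m).
x = np.concatenate([
    [rng.exponential(theta_true, i).max() for i in range(1, m + 1)]
    for _ in range(cycles)
])
ranks = np.tile(np.arange(1, m + 1), cycles)

def nll(theta):
    # Density of the max of i iid Exp(theta):
    # f(x) = (i/theta) * exp(-x/theta) * (1 - exp(-x/theta))**(i-1)
    p = -np.expm1(-x / theta)          # 1 - exp(-x/theta), computed stably
    return -np.sum(np.log(ranks / theta) - x / theta
                   + (ranks - 1) * np.log(p))

theta_hat = minimize_scalar(nll, bounds=(0.1, 20.0), method="bounded").x
```

The `(1 - e^{-x/θ})^{i-1}` factor is what blocks a closed-form solution; the paper's modification replaces the intractable score terms to recover an explicit estimator.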
179.
180.
Carroll KJ, Pharmaceutical Statistics 2006, 5(4):283–293
In oncology, it may not always be possible to evaluate the efficacy of new medicines in placebo-controlled trials. Furthermore, while some newer, biologically targeted anti-cancer treatments may be expected to deliver therapeutic benefit in terms of better tolerability or improved symptom control, they may not always be expected to provide increased efficacy relative to existing therapies. This naturally leads to the use of active-control, non-inferiority trials to evaluate such treatments. In recent evaluations of anti-cancer treatments, the non-inferiority margin has often been defined in terms of demonstrating that at least 50% of the active control effect has been retained by the new drug, using methods such as those described by Rothmann et al. (Statistics in Medicine 2003; 22:239–264) and Wang and Hung (Controlled Clinical Trials 2003; 24:147–155). However, this approach can lead to prohibitively large clinical trials, and it results in a tendency to dichotomize the trial outcome as either 'success' or 'failure', which oversimplifies interpretation. With relatively modest modification, these methods can be used to define a stepwise approach to design and analysis. In the first, design step, the trial is sized to show indirectly that the new drug would have beaten placebo; in the second, analysis step, the probability that the new drug is superior to placebo is assessed; if this is sufficiently high, in the third and final step the efficacy of the new drug relative to control is assessed on a continuum of effect retention via an 'effect retention likelihood plot'. This stepwise approach is likely to provide a more complete assessment of relative efficacy so that the value of new treatments can be better judged.
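The "fraction of the active control effect retained" reduces, at the point-estimate level, to simple log-hazard arithmetic. The hazard ratios below are illustrative, and the cited methods additionally propagate the uncertainty in both the trial estimate and the historical control-vs-placebo estimate:

```python
from math import log

def fraction_retained(hr_new_vs_ctrl, hr_ctrl_vs_placebo):
    """Point estimate of the fraction of the control-vs-placebo
    log-hazard effect retained by the new drug."""
    lhr_cp = log(hr_ctrl_vs_placebo)        # e.g. log(0.70) < 0
    lhr_np = log(hr_new_vs_ctrl) + lhr_cp   # implied new-vs-placebo effect
    return lhr_np / lhr_cp

# Control historically reduced the hazard by 30% vs placebo (HR 0.70);
# the new drug shows HR 1.10 vs control, so about 73% of the control
# effect is retained -- above a 50% retention margin on point estimate.
retained = fraction_retained(1.10, 0.70)
```

Plotting this fraction across a grid of plausible control effects is essentially what the 'effect retention likelihood plot' does, replacing the success/failure dichotomy with a continuum.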