412.
In terms of the risk of making a Type I error in evaluating a null hypothesis of equality, requiring two independent confirmatory trials with two-sided p-values less than 0.05 is equivalent to requiring one confirmatory trial with a two-sided p-value less than 0.00125. Furthermore, the use of a single confirmatory trial is gaining acceptability, with discussion in both ICH E9 and a CPMP Points to Consider document. Given the growing acceptance of this approach, this note provides a formula for the sample size savings that are obtained with the single clinical trial approach depending on the levels of Type I and Type II errors chosen. For two replicate trials each powered at 90%, which corresponds to a single larger trial powered at 81%, an approximate 19% reduction in total sample size is achieved with the single trial approach. Alternatively, a single trial with the same sample size as the total sample size from two smaller trials will have much greater power. For example, in the case where two trials are each powered at 90% for two-sided α=0.05, yielding an overall power of 81%, a single trial using two-sided α=0.00125 would have 91% power. Copyright © 2004 John Wiley & Sons, Ltd.
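The sample-size arithmetic behind these figures can be reproduced from the standard normal-approximation formula; a minimal Python sketch (the textbook formula, not the note's exact derivation):

```python
from statistics import NormalDist

def relative_n(alpha, power):
    """Per-arm sample size up to a common constant under the usual
    normal approximation: (z_{1-alpha/2} + z_{power})**2."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) ** 2

# two replicate trials, each two-sided alpha = 0.05 at 90% power
two_trials = 2 * relative_n(0.05, 0.90)
# one trial at two-sided alpha = 0.00125, with 81% (= 0.90**2) overall power
one_trial = relative_n(0.00125, 0.81)
saving = 1 - one_trial / two_trials          # close to a 19% reduction

# power of a single trial that uses the combined sample size at alpha = 0.00125
z = NormalDist().inv_cdf
big_power = NormalDist().cdf(2 ** 0.5 * (z(0.975) + z(0.90)) - z(1 - 0.00125 / 2))
```

Running this gives a saving of roughly 20% and a combined-sample-size power of about 91%, matching the figures quoted above.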
413.
This paper discusses a novel strategy for simulating rare events and an associated Monte Carlo estimation of tail probabilities. Our method uses a system of interacting particles and exploits a Feynman-Kac representation of that system to analyze their fluctuations. Our precise analysis of the variance of a standard multilevel splitting algorithm reveals an opportunity for improvement. This leads to a novel method that relies on adaptive levels and produces, in the limit of an idealized version of the algorithm, estimates with optimal variance. The motivation for this theoretical work comes from problems occurring in watermarking and fingerprinting of digital content, which represents a new field of applications of rare event simulation techniques. Some numerical results show performance close to the idealized version of our technique for these practical applications.
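The splitting idea can be illustrated in one dimension. The following is a toy multilevel splitting routine with adaptive levels for estimating a Gaussian tail probability; it is not the interacting-particle estimator analyzed in the paper, and the tuning constants are assumed:

```python
import math
import random

def ams_tail_prob(q, n=200, sigma=0.5, moves=20, seed=0):
    """Toy adaptive multilevel splitting estimate of P(X > q) for X ~ N(0, 1).
    Levels are set adaptively by killing the lowest particle; each kill
    multiplies the running estimate by (1 - 1/n)."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    log_p = 0.0
    while min(xs) < q:
        i = min(range(n), key=xs.__getitem__)    # index of the lowest particle
        level = xs[i]
        log_p += math.log1p(-1.0 / n)
        # clone a surviving particle and refresh it with a few Metropolis
        # moves targeting N(0, 1) restricted to (level, +inf)
        x = xs[rng.randrange(n)]
        while x <= level:
            x = xs[rng.randrange(n)]
        for _ in range(moves):
            prop = x + sigma * rng.gauss(0.0, 1.0)
            if prop > level and rng.random() < math.exp(0.5 * (x * x - prop * prop)):
                x = prop
        xs[i] = x
    return math.exp(log_p)

est = ams_tail_prob(3.0)    # true value: 1 - Phi(3), about 1.35e-3
```

With 200 particles the estimate typically lands within a modest factor of the true tail probability, whereas a crude Monte Carlo estimate with a comparable budget would see only a handful of exceedances.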
414.
Thall PF, Nguyen HQ, Wang X, Wolff JE. Journal of Statistical Planning and Inference, 2012, 142(4): 944–955
The problem of comparing several experimental treatments to a standard arises frequently in medical research. Various multi-stage randomized phase II/III designs have been proposed that select one or more promising experimental treatments and compare them to the standard while controlling overall Type I and Type II error rates. This paper addresses phase II/III settings where the joint goals are to increase the average time to treatment failure and control the probability of toxicity while accounting for patient heterogeneity. We are motivated by the desire to construct a feasible design for a trial of four chemotherapy combinations for treating a family of rare pediatric brain tumors. We present a hybrid two-stage design based on two-dimensional treatment effect parameters. A targeted parameter set is constructed from elicited parameter pairs considered to be equally desirable. Bayesian regression models for failure time and the probability of toxicity as functions of treatment and prognostic covariates are used to define two-dimensional covariate-adjusted treatment effect parameter sets. Decisions at each stage of the trial are based on the ratio of posterior probabilities of the alternative and null covariate-adjusted parameter sets. Design parameters are chosen to minimize expected sample size subject to frequentist error constraints. The design is illustrated by application to the brain tumor trial.
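The stage-wise decision rule can be caricatured with simulated posterior draws. The parameter regions and cutoff below are invented for illustration and are far simpler than the elicited, covariate-adjusted sets of the actual design:

```python
import random

def go_no_go(draws, in_alternative, in_null, cutoff=1.0):
    """Stage decision from posterior draws of a two-dimensional treatment-effect
    parameter: compare the ratio of posterior probabilities of the alternative
    and null sets to a cutoff."""
    p_alt = sum(map(in_alternative, draws)) / len(draws)
    p_null = sum(map(in_null, draws)) / len(draws)
    return p_alt / max(p_null, 1e-12) >= cutoff

# hypothetical posterior draws of (failure-time effect, toxicity effect)
rng = random.Random(0)
draws = [(rng.gauss(0.4, 0.1), rng.gauss(0.05, 0.02)) for _ in range(4000)]

# hypothetical regions: alternative = longer failure time AND acceptable toxicity
keep = go_no_go(draws,
                in_alternative=lambda d: d[0] > 0.2 and d[1] < 0.10,
                in_null=lambda d: d[0] <= 0.0)
```

Here the simulated posterior concentrates in the alternative region, so the treatment would be carried forward to the next stage.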
415.
A finite mixture model using the multivariate t distribution has been shown to be a robust extension of normal mixtures. In this paper, we present a Bayesian approach to inference about the parameters of t-mixture models. The prior distributions are specified to be weakly informative to avoid nonintegrable posterior distributions. We present two efficient EM-type algorithms for computing the joint posterior mode with the observed data and an incomplete future vector as the sample. Markov chain Monte Carlo sampling schemes are also developed to obtain the target posterior distribution of the parameters. The advantages of the Bayesian approach over the maximum likelihood method are demonstrated on a set of real data.
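The robustness of the t distribution, and the data augmentation behind both the EM-type and MCMC schemes, rest on its scale-mixture-of-normals representation; a minimal single-component sketch:

```python
import random

def rt(nu, rng):
    """Draw from Student t with nu degrees of freedom via the scale-mixture
    representation X = Z / sqrt(W), Z ~ N(0, 1), W ~ Gamma(nu/2, rate nu/2).
    The latent W is the augmented variable used by EM and MCMC schemes."""
    w = rng.gammavariate(nu / 2.0, 2.0 / nu)   # shape nu/2, scale 2/nu
    return rng.gauss(0.0, 1.0) / w ** 0.5

rng = random.Random(0)
xs = [rt(5.0, rng) for _ in range(200_000)]
var = sum(x * x for x in xs) / len(xs)         # should be near nu/(nu - 2) = 5/3
```

In the univariate standardized case the latent scale has conditional mean E[W | x] = (ν + 1)/(ν + x²), which downweights outlying observations and is the source of the robustness.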
416.
Mohammad Salehi M., George A. F. Seber. Australian & New Zealand Journal of Statistics, 2004, 46(3): 483–494
The lack of a variance estimator is a serious practical weakness of a sampling design. This paper provides unbiased variance estimators for several sampling designs based on inverse sampling, both with and without an adaptive component. It proposes a new design, called the general inverse sampling design, that avoids sampling an infeasibly large number of units. The paper provides estimators for this design as well as for its adaptive modification. A simple artificial example is used to demonstrate the computations. The adaptive and non-adaptive designs are compared using simulations based on real data sets. The results indicate that, for appropriate populations, the adaptive version can achieve a substantial variance reduction compared with the non-adaptive version. Also, adaptive general inverse sampling with a limitation on the initial sample size achieves a greater variance reduction than without the limitation.
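For the classical Bernoulli special case of inverse sampling (not the paper's general design), unbiased estimation can be checked by simulation: sampling continues until k successes are seen, and Haldane's estimator (k − 1)/(n − 1) is unbiased for the success probability p, unlike the naive k/n:

```python
import random

def inverse_sample(p, k, rng):
    """Bernoulli inverse sampling: draw units one at a time until k units with
    the rare attribute are observed; return the total number of draws n."""
    n = successes = 0
    while successes < k:
        n += 1
        successes += rng.random() < p
    return n

rng = random.Random(0)
k, p, reps = 10, 0.2, 5000
est = sum((k - 1) / (inverse_sample(p, k, rng) - 1) for _ in range(reps)) / reps
```

Averaged over many replications the estimator centers on p = 0.2, illustrating the unbiasedness that makes variance estimation tractable for these designs.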
417.
Paul S. Clarke, Peter W. F. Smith. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 2004, 66(2): 357–368
Summary. Log-linear models for multiway contingency tables where one variable is subject to non-ignorable non-response will often yield boundary solutions, with the probability of non-respondents being classified in some cells of the table estimated as 0. The paper considers the effect of this non-standard behaviour on two methods of interval estimation based on the distribution of the maximum likelihood estimator. The first method relies on the estimator being approximately normally distributed with variance equal to the inverse of the information matrix. It is shown that the information matrix is singular for boundary solutions, but intervals can be calculated after a simple transformation. For the second method, based on the bootstrap, asymptotic results suggest that the coverage properties may be poor for boundary solutions. Both methods are compared with profile likelihood intervals in a simulation study based on data from the British General Election Panel Study. The results of this study indicate that all three methods perform poorly for a parameter of the non-response model, whereas they all perform well for a parameter of the margin model, irrespective of whether or not there is a boundary solution.
418.
Statistical agencies have conflicting obligations to protect confidential information provided by respondents to surveys or censuses and to make data available for research and planning activities. When the microdata themselves are to be released, in order to achieve these conflicting objectives, statistical agencies apply statistical disclosure limitation (SDL) methods to the data, such as noise addition, swapping or microaggregation. Some of these methods do not preserve important structure and constraints in the data, such as positivity of some attributes or inequality constraints between attributes. Failure to preserve constraints is not only problematic in terms of data utility, but also may increase disclosure risk. In this paper, we describe a method for SDL that preserves both positivity of attributes and the mean vector and covariance matrix of the original data. The basis of the method is to apply multiplicative noise with the proper, data-dependent covariance structure.
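The positivity-preserving ingredient can be sketched with unit-mean lognormal multiplicative noise. Note that this toy version preserves the attribute mean only in expectation, whereas the paper's method chooses a data-dependent noise covariance so that the mean vector and covariance matrix are preserved exactly:

```python
import random

rng = random.Random(1)
x = [rng.lognormvariate(0.0, 0.5) for _ in range(10_000)]   # a positive attribute

s = 0.1   # assumed noise scale
# unit-mean lognormal noise: E[m] = exp(-s**2/2 + s**2/2) = 1, and m > 0,
# so masked values stay positive and the attribute mean is preserved in expectation
masked = [v * rng.lognormvariate(-0.5 * s * s, s) for v in x]
```

Additive Gaussian noise, by contrast, would push small values negative, which is exactly the kind of constraint violation the paper is concerned with.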
419.
Jaffa MA, Woolson RF, Lipsitz SR. Journal of the Royal Statistical Society, Series A (Statistics in Society), 2011, 174(2): 387–402
Patients undergoing renal transplantation are prone to graft failure, which causes loss of follow-up measurements of their blood urea nitrogen and serum creatinine levels. These two outcomes are measured repeatedly over time to assess renal function following transplantation. Loss of follow-up on these bivariate measures results in informative right censoring, a common problem in longitudinal data that should be adjusted for so that valid estimates are obtained. In this study, we propose a bivariate model that jointly models these two longitudinal correlated outcomes and generates population and individual slopes adjusting for informative right censoring using a discrete survival approach. The proposed approach is applied to a clinical dataset of patients who had undergone renal transplantation. A simulation study validates the effectiveness of the approach.
420.
Yves F. Atchadé. Statistics and Computing, 2011, 21(4): 463–473
In empirical Bayes inference one is typically interested in sampling from the posterior distribution of a parameter with a hyper-parameter set to its maximum likelihood estimate. This is often problematic, particularly when the likelihood function of the hyper-parameter is not available in closed form and the posterior distribution is intractable. Previous works have dealt with this problem using a multi-step approach based on the EM algorithm and Markov chain Monte Carlo (MCMC). We propose a framework based on recent developments in adaptive MCMC, where this problem is addressed more efficiently using a single Monte Carlo run. We discuss the convergence of the algorithm and its connection with the EM algorithm. We apply our algorithm to the Bayesian Lasso of Park and Casella (J. Am. Stat. Assoc. 103:681–686, 2008) and to the empirical Bayes variable selection of George and Foster (J. Am. Stat. Assoc. 87:731–747, 2000).
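The single-run idea can be illustrated on a toy conjugate model. This is a Robbins-Monro caricature, not the paper's adaptive MCMC algorithm or its Bayesian Lasso application: within one run, the parameter is sampled from its conditional posterior while the hyper-parameter is nudged toward the EM stationarity condition:

```python
import random

# toy model: y_i ~ N(mu, 1) with prior mu ~ N(0, v); v is the hyper-parameter
rng = random.Random(0)
n = 50
y = [2.0 + rng.gauss(0.0, 1.0) for _ in range(n)]
ybar = sum(y) / n

v = 1.0
for t in range(1, 4001):
    # exact draw from mu | y, v ~ N(n*ybar*v/(n*v + 1), v/(n*v + 1))
    post_var = v / (n * v + 1.0)
    mu = rng.gauss(n * ybar * post_var, post_var ** 0.5)
    # Robbins-Monro step toward the EM fixed point v = E[mu^2 | y, v]
    v += t ** -0.7 * (mu * mu - v)

target = ybar ** 2 - 1.0 / n   # closed-form marginal MLE of v in this toy model
```

Because the model is conjugate, the marginal MLE of v is available in closed form here, so the single run can be checked against it; in the settings the paper targets, no such closed form exists and this is exactly the difficulty the adaptive MCMC framework addresses.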