881.
Nolan A. Wages, Alexia Iasonos, John O'Quigley, Mark R. Conaway 《Pharmaceutical Statistics》2020,19(2):137-144
This paper studies the notion of coherence in interval-based dose-finding methods. An incoherent decision is either (a) a recommendation to escalate the dose following an observed dose-limiting toxicity or (b) a recommendation to de-escalate the dose following a non-dose-limiting toxicity. In a simulated example, we illustrate that the Bayesian optimal interval method and the Keyboard method are not coherent. We generated dose-limiting toxicity outcomes under an assumed set of true probabilities for a trial of n=36 patients in cohorts of size 1, and we counted the number of incoherent dosing decisions that were made throughout this simulated trial. Each of the methods studied resulted in 13/36 (36%) incoherent decisions in the simulated trial. Additionally, for two different target dose-limiting toxicity rates, 20% and 30%, and a sample size of n=30 patients, we randomly generated 100 dose-toxicity curves and tabulated the number of incoherent decisions made by each method in 1000 simulated trials under each curve. For each method studied, the probability of incurring at least one incoherent decision during the conduct of a single trial is greater than 75%. Coherence is an important principle in the conduct of dose-finding trials. Interval-based methods violate this principle for cohorts of size 1 and require additional modifications to overcome this shortcoming. Researchers need to take a closer look at the dose assignment behavior of interval-based methods when using them to plan dose-finding studies.
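The incoherence being counted can be reproduced with a toy interval-design loop. The sketch below is a minimal illustration, not the authors' simulation code: it escalates when the observed toxicity rate at the current dose falls at or below a lower boundary and de-escalates at or above an upper boundary, with boundaries close to BOIN's published defaults for a 30% target; the helper names are hypothetical.

```python
import random

# A minimal sketch (not the authors' simulation code) of an interval-design
# dose-escalation loop with cohorts of size 1.  lambda_e and lambda_d are close
# to BOIN's published default boundaries for a 30% DLT target.

def run_trial(true_tox, n=36, lambda_e=0.236, lambda_d=0.358, seed=1):
    """Run one simulated trial and count incoherent dosing decisions."""
    rng = random.Random(seed)
    n_doses = len(true_tox)
    dose = 0
    n_at = [0] * n_doses      # patients treated at each dose
    tox_at = [0] * n_doses    # DLTs observed at each dose
    incoherent = 0
    for _ in range(n):
        dlt = rng.random() < true_tox[dose]
        n_at[dose] += 1
        tox_at[dose] += dlt
        p_hat = tox_at[dose] / n_at[dose]
        if p_hat <= lambda_e:
            nxt = min(dose + 1, n_doses - 1)   # escalate
        elif p_hat >= lambda_d:
            nxt = max(dose - 1, 0)             # de-escalate
        else:
            nxt = dose                         # stay
        # Incoherent: escalate right after a DLT, or de-escalate after a non-DLT
        if (nxt > dose and dlt) or (nxt < dose and not dlt):
            incoherent += 1
        dose = nxt
    return incoherent

print(run_trial([0.05, 0.15, 0.30, 0.45]))
```

Because the decision depends only on the pooled toxicity rate at the current dose, a single new DLT can leave that rate below the escalation boundary, triggering exactly the escalate-after-toxicity move the paper flags.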
882.
Sharon Varghese A 《Communications in Statistics: Theory and Methods》2020,49(12):3026-3043
Abstract: In the literature, the Lindley distribution is considered an alternative to the exponential distribution for fitting lifetime data. In the present work, a Lindley step-stress model with independent causes of failure is proposed. An algorithm to generate random samples from the proposed model under a Type-I censoring scheme is developed. Point and interval estimation of the model parameters is carried out using the maximum likelihood method and a percentile bootstrap approach. To assess the effectiveness of the resulting estimates, numerical illustrations are provided based on simulated and real-life data sets.
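Sampling from a Lindley distribution is straightforward via its textbook mixture representation: Exp(θ) with probability θ/(θ+1), otherwise Gamma(2, rate θ). A minimal sampler built on that fact (a sketch only; the paper's step-stress structure and censoring are not reproduced) could look like:

```python
import random

# A minimal sketch, assuming only the standard fact that Lindley(theta) is a
# mixture: Exp(theta) w.p. theta/(theta+1), else Gamma(2, rate=theta).

def rlindley(theta, n, seed=0):
    """Draw n variates from a Lindley(theta) distribution."""
    rng = random.Random(seed)
    p = theta / (theta + 1.0)               # weight of the Exp(theta) component
    out = []
    for _ in range(n):
        if rng.random() < p:
            out.append(rng.expovariate(theta))
        else:                               # Gamma(2, theta) = sum of two Exp(theta)
            out.append(rng.expovariate(theta) + rng.expovariate(theta))
    return out

sample = rlindley(theta=1.5, n=5000)
# Theoretical mean is (theta + 2) / (theta * (theta + 1)) ~ 0.933 for theta = 1.5
print(sum(sample) / len(sample))
```

The same mixture trick extends to step-stress settings by switching θ at the stress-change time, which is roughly the structure the paper's generation algorithm has to handle.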
883.
Andressa A. Sleiman, Nicholas Matey 《Journal of Organizational Behavior Management》2020,40(1-2):82-92
Abstract: Over the years, reviews in behavior analysis have sought to identify the most prolific researchers and institutions. The goal of these reviews was to offer a single resource for behavior analysts to identify experts in behavior analysis and quality graduate programs. However, most of these reviews omitted the Journal of Organizational Behavior Management (JOBM), making the results less relevant to those who work in organizational behavior management. The purpose of this review was to extend previous findings and identify the most published researchers, academic institutions, and organizations in JOBM since its inception. Furthermore, we calculated the citation rate (per year, per article) for each of the top 20 most published authors. The results, implications, and opportunities for further analysis are discussed.
884.
In randomized clinical trials, the log rank test is often used to test the null hypothesis of equality of the treatment-specific survival distributions. In observational studies, however, the ordinary log rank test is no longer guaranteed to be valid. In such studies we must be cautious about potential confounders, that is, covariates that affect both the treatment assignment and the survival distribution. In this paper, two cases are considered: the first, in which all potential confounders are believed to be captured in the primary database, and the second, in which a substudy is conducted to capture additional confounding covariates. We generalize the augmented inverse probability weighted complete case estimators for the treatment-specific survival distribution proposed in Bai et al. (Biometrics 69:830–839, 2013) and develop log rank type tests for both cases. The consistency and double robustness of the proposed test statistics are shown in simulation studies. The statistics are then applied to data from the observational study that motivated this research.
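For reference, the ordinary two-sample log-rank statistic that serves as the baseline here can be computed directly from the risk sets. The sketch below uses illustrative data and handles right censoring and ties, but contains none of the inverse-probability weighting or augmentation developed in the paper.

```python
# A plain-Python sketch of the ordinary two-sample log-rank statistic; the data
# are illustrative, and the paper's weighting/augmentation is not attempted.

def logrank(times, events, groups):
    """Two-sample log-rank chi-square statistic (1 df under H0)."""
    data = list(zip(times, events, groups))
    o_minus_e, var = 0.0, 0.0
    for t in sorted({tt for tt, d, _ in data if d}):           # distinct event times
        n = sum(1 for tt, _, _ in data if tt >= t)             # at risk, overall
        n1 = sum(1 for tt, _, g in data if tt >= t and g)      # at risk, group 1
        d_t = sum(1 for tt, d, _ in data if tt == t and d)     # events at t
        d1 = sum(1 for tt, d, g in data if tt == t and d and g)
        o_minus_e += d1 - d_t * n1 / n                         # observed minus expected
        if n > 1:
            var += d_t * (n1 / n) * (1 - n1 / n) * (n - d_t) / (n - 1)
    return o_minus_e ** 2 / var

times = [1, 2, 3, 4, 5, 10, 12, 14, 16, 20]
events = [1, 1, 1, 0, 1, 1, 1, 0, 1, 0]     # 1 = event, 0 = censored
groups = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]     # treatment indicator
print(logrank(times, events, groups))        # compare with the chi2(1) cutoff 3.84
```

In an observational study, confounding would bias this unweighted comparison, which is the motivation for the weighted and augmented versions the paper develops.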
885.
Tree algorithms are a well-known class of random access algorithms with a provable maximum stable throughput under the infinite population model (as opposed to ALOHA or the binary exponential backoff algorithm). In this article, we propose a tree algorithm for opportunistic spectrum usage in cognitive radio networks. A channel in such a network is shared among so-called primary and secondary users, where the secondary users are allowed to use the channel only if there is no primary user activity. The tree algorithm designed in this article can be used by the secondary users to share the channel capacity left by the primary users.
We analyze the maximum stable throughput and mean packet delay of the secondary users by developing a tree structured Quasi-Birth Death Markov chain under the assumption that the primary user activity can be modeled by means of a finite state Markov chain and that packets lengths follow a discrete phase-type distribution.
Numerical experiments provide insight into the effect of various system parameters and indicate that the proposed algorithm is able to make good use of the bandwidth left by the primary users.
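The classical binary (Capetanakis-type) tree algorithm underlying such schemes is easy to simulate: on a collision, each contender flips a fair coin, and the two resulting subsets are resolved recursively. The sketch below estimates the mean number of slots needed to resolve a burst of contenders; no primary-user Markov modulation or QBD analysis is attempted.

```python
import random

# A sketch of the classical binary tree collision-resolution algorithm only;
# the cognitive-radio extensions of the article are not modeled here.

def resolve(n, rng):
    """Slots needed to resolve n colliding stations (fair coin splitting)."""
    if n <= 1:
        return 1                                  # idle or success slot
    # collision slot: each contender joins the left subset with probability 1/2
    left = sum(1 for _ in range(n) if rng.random() < 0.5)
    return 1 + resolve(left, rng) + resolve(n - left, rng)

rng = random.Random(42)
slots = [resolve(8, rng) for _ in range(1000)]
# Mean resolution time for 8 contenders; roughly 2.9 slots per packet
print(sum(slots) / len(slots))
```

The roughly-2.9-slots-per-packet cost is what bounds the maximum stable throughput of the basic algorithm; the article's contribution is fitting this machinery into the slots left idle by primary users.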
886.
Shaun R. Seaman, Daniel Farewell, Ian R. White 《Scandinavian Journal of Statistics》2016,43(4):996-1018
Linear increments (LI) are used to analyse repeated outcome data with missing values. Previously, two LI methods have been proposed, one allowing non-monotone missingness but not independent measurement error and one allowing independent measurement error but only monotone missingness. In both, it was suggested that the expected increment could depend on current outcome. We show that LI can allow non-monotone missingness and either independent measurement error of unknown variance or dependence of expected increment on current outcome but not both. A popular alternative to LI is a multivariate normal model ignoring the missingness pattern. This gives consistent estimation when data are normally distributed and missing at random (MAR). We clarify the relation between MAR and the assumptions of LI and show that for continuous outcomes multivariate normal estimators are also consistent under (non-MAR and non-normal) assumptions not much stronger than those of LI. Moreover, when missingness is non-monotone, they are typically more efficient.
887.
This article investigates the choice of working covariance structures in the analysis of spatially correlated observations, motivated by cardiac imaging data. Through Monte Carlo simulations, we found that the choice of covariance structure affects the efficiency of the estimator and the power of the test. Choosing the popular unstructured working covariance structure results in an over-inflated Type I error, possibly due to a sample size not large enough relative to the number of parameters being estimated. With regard to model fit indices, the Bayesian Information Criterion outperforms the Akaike Information Criterion in choosing the correct covariance structure used to generate the data.
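The model-selection bookkeeping behind that comparison is just two penalized log-likelihoods. In the sketch below, the log-likelihoods and parameter counts are made-up stand-ins for fitted compound-symmetry, AR(1), and unstructured models (not values from the article); they are chosen so that BIC's stronger penalty steers the choice away from the 21-parameter unstructured fit that AIC selects.

```python
import math

# Illustrative only: the fitted log-likelihoods below are invented stand-ins,
# not numbers from the article.

def aic(loglik, k):
    # Akaike Information Criterion: 2k - 2 log L (smaller is better)
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    # Bayesian Information Criterion: k log n - 2 log L (smaller is better)
    return k * math.log(n) - 2 * loglik

# (log-likelihood, number of covariance parameters); with p = 6 repeated
# measures the unstructured model needs p(p+1)/2 = 21 parameters.
fits = {"CS": (-412.3, 2), "AR1": (-410.8, 2), "UN": (-390.0, 21)}
n = 50  # subjects

best_aic = min(fits, key=lambda m: aic(fits[m][0], fits[m][1]))
best_bic = min(fits, key=lambda m: bic(fits[m][0], fits[m][1], n))
print(best_aic, best_bic)  # AIC keeps the rich model; BIC picks a parsimonious one
```

With n = 50, the BIC penalty per parameter is log(50) ≈ 3.9 versus AIC's 2, which is exactly why an over-parameterized unstructured fit survives AIC but not BIC.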
888.
In this article, we use cumulative residual Kullback-Leibler information (CRKL) and cumulative Kullback-Leibler information (CKL) to construct two goodness-of-fit test statistics for testing exponentiality with progressively Type-II censored data. The power of the proposed tests is compared with that of the goodness-of-fit test for exponentiality introduced by Balakrishnan et al. (2007). We show that when the hazard function of the alternative is monotone decreasing, the test based on CRKL has higher power, and when the hazard function of the alternative is non-monotone, the test based on CKL has higher power. When the hazard function is monotone increasing, however, the power difference between the CKL-based test and the test of Balakrishnan et al. is not substantial. The use of the proposed tests is demonstrated in an illustrative example.
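For complete (uncensored) samples, the cumulative residual KL information between the empirical survival function and a fitted exponential reduces to a closed-form sum over the order statistics, using the standard definition CRKL = ∫ S_n log(S_n/S_0) dx − (sample mean − fitted mean), which is nonnegative. The sketch below is that complete-sample analogue only; the paper's progressively Type-II censored statistics differ in detail.

```python
import math

# Complete-sample sketch only: CRKL between the empirical survival function and
# a fitted Exp(lambda); the paper's censored versions differ in detail.

def crkl_exp(sample):
    x = sorted(sample)
    n = len(x)
    lam = n / sum(x)          # exponential MLE; fitted mean equals the sample mean
    total, prev = 0.0, 0.0
    for i, xi in enumerate(x):
        s_n = 1.0 - i / n     # empirical survival on the interval [prev, xi)
        if s_n > 0.0:
            # log(S_n / S_0) = log S_n + lam * x, integrated piecewise
            total += s_n * math.log(s_n) * (xi - prev)
            total += s_n * lam * (xi * xi - prev * prev) / 2.0
        prev = xi
    # the (sample mean - fitted mean) term vanishes because lam = 1 / xbar
    return total

print(crkl_exp([0.3, 0.8, 1.1, 1.9, 2.4, 3.6, 5.0]))  # nonnegative; near 0 for exponential-like data
```

A test would reject exponentiality for large values of this statistic, with critical values obtained by simulation under the null.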
889.
890.
Binary logistic regression is a widely used statistical method when the dependent variable has two categories. In most applications of logistic regression, the independent variables are collinear, a situation known as the multicollinearity problem. It is known that multicollinearity inflates the variance of the maximum likelihood estimator (MLE). Therefore, this article introduces new shrinkage parameters for the Liu-type estimator of Liu (2003) in the logistic regression model defined by Huang (2012) in order to decrease the variance and overcome the problem of multicollinearity. A Monte Carlo study is designed to show the superiority of the proposed estimators over the MLE in the sense of mean squared error (MSE) and mean absolute error (MAE). Moreover, a real data case is given to demonstrate the advantages of the new shrinkage parameters.
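The Liu-type estimator referred to has the closed form β̂_d = (X'ŴX + I)⁻¹(X'ŴX + dI)β̂_MLE once the logistic MLE and its weight matrix Ŵ are available. The sketch below fixes d rather than using the paper's proposed data-driven shrinkage parameters, fits the MLE by Newton-Raphson on nearly collinear simulated predictors, and then applies the shrinkage; treat it as an illustration of the form, not the paper's method.

```python
import numpy as np

# A sketch only: Liu-type shrinkage with a fixed d; the paper's new
# data-driven choices of d are not reproduced here.

def logistic_mle(X, y, iters=25):
    """Logistic MLE and the weighted cross-product matrix X'WX at the optimum."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):                      # Newton-Raphson updates
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        beta = beta + np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)
    return beta, X.T @ (W[:, None] * X)

def liu_estimator(X, y, d=0.5):
    # beta_d = (X'WX + I)^(-1) (X'WX + d I) beta_MLE
    beta_mle, XtWX = logistic_mle(X, y)
    I = np.eye(X.shape[1])
    return np.linalg.solve(XtWX + I, (XtWX + d * I) @ beta_mle)

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.05 * rng.normal(size=200)     # nearly collinear with x1
X = np.column_stack([np.ones(200), x1, x2])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-(0.5 + x1)))).astype(float)
print(liu_estimator(X, y, d=0.5))         # shrunken coefficient vector
```

Setting d = 1 recovers the MLE, while smaller d pulls the estimate toward the heavier-shrinkage end, which is the variance-bias trade-off the proposed shrinkage parameters tune.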