Full-text access type
Paid full text | 7,197 articles |
Free | 182 articles |
Free (domestic) | 50 articles |
Subject classification
Management | 278 articles |
Labor science | 1 article |
Ethnology | 94 articles |
Talent studies | 1 article |
Demography | 135 articles |
Collected works and series | 655 articles |
Theory and methodology | 254 articles |
General | 4,309 articles |
Sociology | 578 articles |
Statistics | 1,124 articles |
Publication year
2024 | 21 articles |
2023 | 80 articles |
2022 | 45 articles |
2021 | 79 articles |
2020 | 120 articles |
2019 | 121 articles |
2018 | 212 articles |
2017 | 312 articles |
2016 | 172 articles |
2015 | 180 articles |
2014 | 329 articles |
2013 | 824 articles |
2012 | 449 articles |
2011 | 365 articles |
2010 | 360 articles |
2009 | 333 articles |
2008 | 353 articles |
2007 | 460 articles |
2006 | 506 articles |
2005 | 462 articles |
2004 | 396 articles |
2003 | 419 articles |
2002 | 300 articles |
2001 | 270 articles |
2000 | 143 articles |
1999 | 42 articles |
1998 | 17 articles |
1997 | 10 articles |
1996 | 8 articles |
1995 | 9 articles |
1994 | 8 articles |
1993 | 4 articles |
1992 | 6 articles |
1991 | 2 articles |
1990 | 2 articles |
1988 | 4 articles |
1987 | 1 article |
1986 | 1 article |
1984 | 1 article |
1983 | 1 article |
1981 | 1 article |
1980 | 1 article |
Sort order: 7,429 results found (search time: 515 ms)
991.
The study of dependence between two medical diagnostic tests is an important issue in health research, since dependence can modify the diagnosis and, therefore, the decision regarding a therapeutic treatment for an individual. In many practical situations, the diagnostic procedure includes two tests with outcomes on a continuous scale. For final classification, there is usually an additional "gold standard" or reference test. With binary test responses, we usually assume either independence between the tests or a joint binary structure for the dependence. In this article, we present a simulation study of two dependent dichotomized tests under two copula-based dependence structures, in the presence or absence of verification bias. We compare the test parameter estimators obtained under copula dependence with those obtained assuming binary dependence or independent tests.
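The simulation setup described above can be sketched as follows. This is a minimal illustration, not the article's code: it assumes a Gaussian copula for the dependence (the article's two copula families are not named in the abstract), a hypothetical disease prevalence, effect size, and cutoffs, and it ignores verification bias.

```python
import numpy as np

def simulate_dependent_tests(n, rho, cutoff1, cutoff2, disease_prev, seed=0):
    """Draw correlated latent scores for two diagnostic tests via a
    Gaussian copula, then dichotomize each score at its cutoff."""
    rng = np.random.default_rng(seed)
    disease = rng.random(n) < disease_prev
    # Gaussian copula: correlated standard normal latent scores
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    # Shift scores upward for diseased subjects (hypothetical effect size)
    scores = z + 1.5 * disease[:, None]
    t1 = scores[:, 0] > cutoff1
    t2 = scores[:, 1] > cutoff2
    return disease, t1, t2

disease, t1, t2 = simulate_dependent_tests(
    50_000, rho=0.6, cutoff1=0.75, cutoff2=0.75, disease_prev=0.3)
sens1 = t1[disease].mean()   # empirical sensitivity of test 1
agree = (t1 == t2).mean()    # raw agreement between the dichotomized tests
```

Setting `rho=0` reproduces the independence assumption the article compares against; the gap in `agree` between the two settings shows how much the copula dependence matters.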
992.
This article concerns tests for normality based on the Shapiro–Wilk W statistic. The constants in the test statistic are recalculated, since those given in Shapiro and Wilk are incorrect. The empirical significance levels and power of the improved tests are evaluated in a simulation study and compared with those of the original tests. The improved tests are also applied to the multivariate case, where we consider two implementations of the W statistic: one proposed by Srivastava and Hui, and the other by Hanusz and Tarasinska. The empirical size and power of the tests are compared with the Henze–Zirkler test.
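The kind of simulation the article performs can be sketched as below. This uses SciPy's stock `shapiro` implementation (not the article's recalculated constants) and hypothetical settings: empirical size is the rejection rate under normal data, power is the rejection rate under a skewed (exponential) alternative.

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(42)
alpha, n_rep, n = 0.05, 2000, 30   # nominal level, replications, sample size

# Empirical size: rejection rate of the W test when normality holds
rejections = sum(shapiro(rng.normal(size=n)).pvalue < alpha
                 for _ in range(n_rep))
size = rejections / n_rep

# Empirical power against a skewed alternative (exponential data)
rejections_alt = sum(shapiro(rng.exponential(size=n)).pvalue < alpha
                     for _ in range(n_rep))
power = rejections_alt / n_rep
```

A test with correct constants should give `size` close to the nominal 0.05; a systematic departure is exactly the kind of defect the recalculated constants are meant to fix.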
993.
Existing process capability indices (PCIs) assume that the distribution of the process under investigation is normal. For non-normal distributions, PCIs become unreliable: they may indicate that the process is capable when in fact it is not. In this paper, we propose a new index that can be applied to any distribution. The proposed index, Cf, is directly related to the probability of non-conformance of the process. For a given random sample, estimating Cf boils down to estimating non-parametrically the tail probabilities of an unknown distribution. The approach discussed in this paper is based on the work of Pickands (1975) and Smith (1987). We also discuss the construction of bootstrap confidence intervals for Cf based on the accelerated bias-corrected method (BCa). Several simulations are carried out to demonstrate the flexibility and applicability of Cf, and two real data sets are analyzed using the proposed index.
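The core idea, a capability measure driven by the nonconformance probability rather than by normal-theory moments, can be illustrated with a simple stand-in. The sketch below is not the paper's Cf (whose exact definition and Pickands/Smith tail estimator are not given in the abstract): it estimates the nonconformance probability empirically and maps it to the normal-equivalent capability a process with that nonconformance rate would have.

```python
import numpy as np
from statistics import NormalDist

def capability_from_nonconformance(sample, lsl, usl):
    """Estimate P(nonconforming) empirically, then report the capability
    index a normal process with the same nonconformance rate would have
    (a hypothetical stand-in for the paper's Cf)."""
    p = np.mean((sample < lsl) | (sample > usl))
    # Guard against an estimate of exactly zero in finite samples
    p = max(p, 1.0 / (2 * len(sample)))
    return NormalDist().inv_cdf(1 - p) / 3

rng = np.random.default_rng(7)
# A skewed (lognormal) process, where normal-theory PCIs mislead
data = rng.lognormal(mean=0.0, sigma=0.4, size=20_000)
index = capability_from_nonconformance(data, lsl=0.3, usl=3.0)
```

The paper replaces the raw empirical tail counts with generalized-Pareto tail estimates in the spirit of Pickands (1975) and Smith (1987), which stabilizes the estimate when nonconformance is rare.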
994.
Communications in Statistics: Simulation and Computation, 2013, 42(3): 677-696
We develop a Bayesian statistical model for estimating bowhead whale population size from photo-identification data when most of the population is uncatchable. The proposed conditional likelihood function is a product of Darroch's model, formulated as a function of the number of good photos, and a binomial distribution of captured whales given the total number of good photos on each occasion. The full Bayesian model is implemented via adaptive rejection sampling for log-concave densities. We apply the model to data from the 1985 and 1986 bowhead whale photographic studies, and the results compare favorably with those obtained in the literature. A comparison with the maximum likelihood procedure with bootstrap simulation is also considered, using different vague priors for the capture probabilities.
995.
Communications in Statistics: Simulation and Computation, 2013, 42(3): 799-833
In a quantitative linear model with errors following a stationary Gaussian first-order autoregressive, or AR(1), process, Generalized Least Squares (GLS) on raw data and Ordinary Least Squares (OLS) on prewhitened data are efficient methods for estimating the slope parameters when the autocorrelation parameter of the error AR(1) process, ρ, is known. In practice, ρ is generally unknown. In so-called two-stage estimation procedures, ρ is first estimated, and the estimate is then used to transform the data before the slope parameters are estimated by OLS on the transformed data. Different estimators of ρ have been considered in previous studies. In this article, we study nine two-stage estimation procedures for their efficiency in estimating the slope parameters. Six of them (three noniterative, three iterative) are based on three estimators of ρ that have been considered previously. Two more (one noniterative, one iterative) are based on a new estimator of ρ that we propose: the sample autocorrelation coefficient of the OLS residuals at lag 1, denoted r(1). Lastly, REstricted Maximum Likelihood (REML) represents a different type of two-stage estimation procedure whose efficiency has not yet been compared with the others. We also study the validity of the testing procedures derived from GLS and from the nine two-stage estimation procedures. Efficiency and validity are analyzed in a Monte Carlo study. Three types of explanatory variable x in a simple quantitative linear model with AR(1) errors are considered in the time domain: Case 1, x is fixed; Case 2, x is purely random; and Case 3, x follows an AR(1) process with the same autocorrelation parameter value as the error AR(1) process. In a preliminary step, the number of inadmissible estimates and the efficiency of the different estimators of ρ are compared empirically, whereas their approximate expected values in finite samples and their asymptotic variances are derived theoretically. Thereafter, the efficiency of the estimation procedures and the validity of the derived testing procedures are discussed in terms of the sample size and the magnitude and sign of ρ. The noniterative two-stage estimation procedure based on the new estimator of ρ is shown to be more efficient for moderate values of ρ at small sample sizes. With the exception of small sample sizes, REML and its derived F-test perform best overall. The asymptotic equivalence of the two-stage estimation procedures, besides REML, is observed empirically. Differences related to the nature of the explanatory variable, fixed or random (uncorrelated or autocorrelated), are also discussed.
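The proposed noniterative two-stage procedure based on r(1) can be sketched as follows. This is an illustration under stated assumptions, not the article's code: it uses a Cochrane-Orcutt-type transform that drops the first observation (the article's exact transform may differ, e.g., a Prais-Winsten variant), and it simulates Case 2, a purely random x.

```python
import numpy as np

def two_stage_ar1(x, y):
    """Noniterative two-stage slope estimation for y = b0 + b1*x with AR(1)
    errors; rho is estimated by r(1), the lag-1 sample autocorrelation of
    the OLS residuals."""
    X = np.column_stack([np.ones_like(x), x])
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta_ols
    rho_hat = np.sum(e[1:] * e[:-1]) / np.sum(e ** 2)   # r(1)
    # Cochrane-Orcutt-type prewhitening (drops the first observation)
    y_t = y[1:] - rho_hat * y[:-1]
    X_t = X[1:] - rho_hat * X[:-1]
    beta_gls, *_ = np.linalg.lstsq(X_t, y_t, rcond=None)
    return rho_hat, beta_gls

rng = np.random.default_rng(1)
n, rho, slope = 500, 0.6, 2.0
u = np.zeros(n)
eps = rng.normal(size=n)
for t in range(1, n):                 # generate the AR(1) error process
    u[t] = rho * u[t - 1] + eps[t]
x = rng.normal(size=n)                # Case 2: purely random x
y = 1.0 + slope * x + u
rho_hat, beta = two_stage_ar1(x, y)   # beta[1] estimates the slope
```

The iterative variants studied in the article would re-estimate ρ from the transformed-model residuals and repeat until convergence.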
996.
Ken-Ichi Koike, Communications in Statistics: Theory and Methods, 2013, 42(12): 2185-2195
In this article, some results are derived on stochastic comparisons of the residual and past lifetimes of an (n − k + 1)-out-of-n system with dependent components. These findings generalize some recent results obtained for systems with independent components and provide some interesting results for a system with dependent components following an Archimedean copula. An illustrative example is provided.
997.
The problem of making statistical inference about θ = P(X > Y) has been investigated extensively in the literature using simple random sampling (SRS) data. The problem arises naturally in the area of reliability for a system with strength X and stress Y. In this study, we consider making statistical inference about θ using ranked set sampling (RSS) data. Several estimators of θ based on RSS are proposed; their properties are investigated and compared with those of known estimators based on SRS data. The proposed estimators based on RSS dominate those based on SRS. A motivating example using a real data set is given to illustrate the computation of the newly suggested estimators.
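A minimal sketch of the setting: balanced ranked set sampling under perfect ranking, with a plug-in (Mann-Whitney-type) estimator of θ. The set size, number of cycles, and normal stress-strength model are hypothetical choices, and the abstract does not specify which RSS estimators the study proposes.

```python
import numpy as np

def rss_sample(rng, draw, set_size, cycles):
    """Balanced ranked set sample with perfect ranking: for each rank r,
    draw `set_size` units and keep the r-th order statistic, once per cycle."""
    out = []
    for _ in range(cycles):
        for r in range(set_size):
            units = np.sort(draw(rng, set_size))
            out.append(units[r])
    return np.array(out)

rng = np.random.default_rng(3)
m, k = 4, 200   # set size and number of cycles (hypothetical choices)
# Stress-strength setup: strength X ~ N(1, 1), stress Y ~ N(0, 1),
# so theta = P(X > Y) = Phi(1/sqrt(2)), roughly 0.76
X = rss_sample(rng, lambda r, n: r.normal(loc=1.0, size=n), m, k)
Y = rss_sample(rng, lambda r, n: r.normal(loc=0.0, size=n), m, k)
# Plug-in estimator of P(X > Y) from the two RSS samples
theta_hat = np.mean(X[:, None] > Y[None, :])
```

Comparing the Monte Carlo variance of `theta_hat` against the same estimator computed from plain SRS draws of equal size reproduces the kind of dominance comparison the study reports.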
998.
999.
Stochastic Models, 2013, 29(2): 193-227
The Double Chain Markov Model is a fully Markovian model for the representation of time series in random environments. In this article, we show that it can handle high-order transitions between both a set of observations and a set of hidden states. To reduce the number of parameters, each transition matrix can be replaced by a Mixture Transition Distribution model. We provide a complete derivation of the algorithms needed to compute the model. Three applications, the analysis of a DNA sequence, the song of the wood pewee, and the behavior of young monkeys, show that this model is of great interest for the representation of data that can be decomposed into a finite set of patterns.
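The generative structure of the double chain can be sketched in a few lines, shown here for the first-order case with hypothetical transition matrices: the hidden state follows its own Markov chain, and the next observation depends on both the current hidden state and the previous observation.

```python
import numpy as np

rng = np.random.default_rng(5)
n_states, n_obs, T = 2, 3, 400

# Hidden-state transition matrix A[s, s'] (hypothetical values)
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
# Observation transition matrices B[s, o, o']: distribution of the next
# symbol given hidden state s and previous symbol o (the "double chain")
B = np.array([[[0.7, 0.2, 0.1],
               [0.1, 0.7, 0.2],
               [0.2, 0.1, 0.7]],
              [[0.1, 0.1, 0.8],
               [0.8, 0.1, 0.1],
               [0.1, 0.8, 0.1]]])

s, o = 0, 0
states, obs = [s], [o]
for _ in range(T - 1):
    s = rng.choice(n_states, p=A[s])      # hidden chain step
    o = rng.choice(n_obs, p=B[s, o])      # observed chain step, given s and o
    states.append(s)
    obs.append(o)
```

For the high-order transitions the article studies, each matrix is indexed by several past values, and the Mixture Transition Distribution model approximates that high-order matrix as a weighted mixture of the single-lag ones to keep the parameter count manageable.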
1000.
J. Andrew Howe 《Journal of Statistical Computation and Simulation》2013,83(3):446-457
In this paper, we address the problem of simulating from a data-generating process whose observed data do not follow a regular probability distribution. One existing method is bootstrapping, but it cannot interpolate between observed data points. For univariate or bivariate data in which a mixture structure is easily identified, we could instead simulate from a Gaussian mixture model; in general, though, we would face the problem of identifying and estimating the mixture model. Instead, we introduce a non-parametric method for simulating such datasets: Kernel Carlo Simulation. Our algorithm begins by using kernel density estimation to build a target probability distribution. An envelope function guaranteed to lie above the target distribution is then created, and we use simple accept–reject sampling. Our approach is more flexible than the alternatives, can simulate intelligently across gaps in the data, and requires no subjective modelling decisions. With several univariate and multivariate examples, we show that our method returns simulated datasets that, compared with the observed data, retain the covariance structures and have remarkably similar distributional characteristics.
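The KDE-plus-accept-reject pipeline described above can be sketched in one dimension. This is a simplified illustration, not the paper's algorithm: it uses a Gaussian kernel with Silverman's rule-of-thumb bandwidth and a flat envelope over a bounded support, ignoring the tiny tail mass beyond three bandwidths of the data range, whereas the paper's envelope construction is more careful.

```python
import numpy as np

def kde_pdf(x, data, h):
    """Gaussian kernel density estimate evaluated at the points x."""
    z = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def kernel_carlo(data, n_draws, seed=0):
    """Accept-reject sampling from a Gaussian KDE of `data` under a flat
    envelope (a simplification of the paper's envelope function)."""
    rng = np.random.default_rng(seed)
    h = 1.06 * data.std() * len(data) ** -0.2   # Silverman's rule of thumb
    lo, hi = data.min() - 3 * h, data.max() + 3 * h
    grid = np.linspace(lo, hi, 512)
    c = 1.1 * kde_pdf(grid, data, h).max()      # envelope height with margin
    out = []
    while len(out) < n_draws:
        x = rng.uniform(lo, hi, size=n_draws)   # proposals from the envelope
        u = rng.uniform(0, c, size=n_draws)
        out.extend(x[u < kde_pdf(x, data, h)])  # accept below the KDE curve
    return np.array(out[:n_draws])

rng = np.random.default_rng(11)
observed = rng.normal(size=300)
simulated = kernel_carlo(observed, 2000)
```

Unlike the bootstrap, the draws fall between the observed points as well as on them, which is the interpolation property the paper emphasizes.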