101.
The Pacific Rim Library (PRL) is an initiative of the Pacific Rim Digital Library Association (PRDLA). The project began in 2006 using the OAI-PMH paradigm and now holds over 300,000 records harvested from OAI data-provider libraries around the Pacific. PRL's goal is to enable the sharing of digital collections amongst PRDLA members and the world, but unexpected additional benefits have emerged. By mirroring member libraries' metadata, PRL increases the chance that those collections will be discovered in Google and other general search engines. With its many disparate collections, PRL is not a repository for traditional information discovery and retrieval. Typically, a user bounces from a Google hit to the PRL metadata record in Hong Kong and then begins an intensive search on the original site that hosts the full digital object, in Vancouver, Honolulu, Wuhan, Singapore, or another PRDLA member location.
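The harvesting step behind a project like this can be sketched as follows. The snippet only shows how an OAI-PMH `ListRecords` request is formed; the endpoint URL is invented for illustration, and this is not PRL's actual harvester.

```python
# Sketch of forming an OAI-PMH ListRecords request, the protocol used to
# harvest metadata from data-provider libraries. Endpoint is hypothetical.
from urllib.parse import urlencode

def build_listrecords_url(base_url, metadata_prefix="oai_dc",
                          resumption_token=None):
    """Build an OAI-PMH ListRecords request URL.

    A harvester first asks for records in a given metadata format
    (commonly Dublin Core, "oai_dc"); subsequent pages are fetched by
    echoing back the resumptionToken the provider returned.
    """
    params = {"verb": "ListRecords"}
    if resumption_token:
        # Per the OAI-PMH spec, resumptionToken is an exclusive argument.
        params["resumptionToken"] = resumption_token
    else:
        params["metadataPrefix"] = metadata_prefix
    return base_url + "?" + urlencode(params)

# Hypothetical data-provider endpoint, for illustration only.
url = build_listrecords_url("https://example.org/oai")
print(url)  # https://example.org/oai?verb=ListRecords&metadataPrefix=oai_dc
```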
102.
In response surface methodology, one is usually interested in estimating the optimal conditions based on a small number of experimental runs designed to sample the experimental space optimally. Typically, regression models are constructed from the experimental data and interrogated to provide a point estimate of the independent-variable settings predicted to optimize the response. Unfortunately, these point estimates are rarely accompanied by uncertainty intervals. Though classical frequentist confidence intervals can be constructed for unconstrained quadratic models, higher-order, constrained, or nonlinear models are often encountered in practice. Existing techniques for constructing uncertainty estimates in such situations have not been widely implemented, due in part to the need to set adjustable parameters or because of limited or difficult applicability to constrained or nonlinear problems. To address these limitations, a Bayesian method of determining credible intervals for response surface optima was developed. The approach shows good coverage probabilities on two test problems, is straightforward to implement, and is readily applicable to the kinds of constrained and/or nonlinear problems that frequently appear in practice.
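The core idea can be sketched in a few lines: propagate posterior uncertainty in the model coefficients through to the location of the optimum, then report a percentile credible interval. The quadratic model and the made-up Gaussian "posterior" for its coefficients below are illustrative assumptions, not the authors' actual method.

```python
# Minimal sketch: a credible interval for the optimum of a fitted quadratic
# y = b0 + b1*x + b2*x^2, obtained by pushing posterior draws of (b1, b2)
# through x* = -b1 / (2*b2). The "posterior" here is invented; in practice
# the draws would come from an MCMC fit to the experimental data.
import random

random.seed(1)

def draw_coefficients():
    # Pretend posterior draws; b2 < 0 so the quadratic has a maximum.
    b1 = random.gauss(4.0, 0.3)
    b2 = random.gauss(-2.0, 0.2)
    return b1, b2

optima = []
for _ in range(5000):
    b1, b2 = draw_coefficients()
    optima.append(-b1 / (2.0 * b2))   # optimum of the quadratic

optima.sort()
lo = optima[int(0.025 * len(optima))]
hi = optima[int(0.975 * len(optima))]
print(f"95% credible interval for x*: ({lo:.2f}, {hi:.2f})")
```

The same recipe extends to constrained or higher-order surfaces: only the per-draw optimization step changes.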
103.
In this note we provide a counterexample which resolves conjectures about Hadamard matrices made in this journal. Beder [1998. Conjectures about Hadamard matrices. Journal of Statistical Planning and Inference 72, 7–14] conjectured that if H is a maximal m×n row-Hadamard matrix then m is a multiple of 4; and that if n is a power of 2 then every row-Hadamard matrix can be extended to a Hadamard matrix. Using binary integer programming we obtain a maximal 13×32 row-Hadamard matrix, which disproves both conjectures. Additionally, for n a multiple of 4 up to 64, we tabulate values of m for which we have found a maximal row-Hadamard matrix. Based on the tabulated results we conjecture that an m×n row-Hadamard matrix with m ≥ n−7 can be extended to a Hadamard matrix.
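The objects involved can be made concrete with a few lines of code: a Sylvester construction for Hadamard matrices of order 2^k, and a check of the row-Hadamard property (all entries ±1, rows pairwise orthogonal). The binary integer program used in the note to find maximal row-Hadamard matrices is not reproduced here.

```python
# Sylvester construction and a row-Hadamard check, for illustration only.
def sylvester(k):
    """Return the 2^k x 2^k Sylvester Hadamard matrix as lists of +/-1."""
    h = [[1]]
    for _ in range(k):
        # H -> [[H, H], [H, -H]], doubling the order each step.
        h = ([row + row for row in h] +
             [row + [-x for x in row] for row in h])
    return h

def is_row_hadamard(rows):
    """True if all entries are +/-1 and distinct rows are orthogonal."""
    if any(x not in (1, -1) for row in rows for x in row):
        return False
    return all(sum(a * b for a, b in zip(rows[i], rows[j])) == 0
               for i in range(len(rows)) for j in range(i + 1, len(rows)))

h8 = sylvester(3)                  # a full 8x8 Hadamard matrix
print(is_row_hadamard(h8))         # True
print(is_row_hadamard(h8[:5]))     # any subset of its rows is row-Hadamard
```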
104.
Vittorio Addona, Masoud Asgharian & David B. Wolfson, Revue canadienne de statistique, 2009, 37(2): 206–218
For many diseases, logistic constraints render large incidence studies difficult to carry out. This becomes a drawback, particularly when a new study is needed each time the incidence rate is investigated in a new population. By carrying out a prevalent cohort study with follow‐up it is possible to estimate the incidence rate if it is constant. The authors derive the maximum likelihood estimator (MLE) of the overall incidence rate, λ, as well as age‐specific incidence rates, by exploiting the epidemiologic relationship, (prevalence odds) = (incidence rate) × (mean duration), i.e., P/[1 − P] = λ × µ. The authors establish the asymptotic distributions of the MLEs and provide approximate confidence intervals for the parameters. Moreover, the MLE of λ is asymptotically most efficient and is the natural estimator obtained by substituting the marginal maximum likelihood estimators for P and µ into P/[1 − P] = λ × µ. Following up the subjects allows the authors to develop these widely applicable procedures. The authors apply their methods to data collected as part of the Canadian Study of Health and Ageing to estimate the incidence rate of dementia amongst elderly Canadians. The Canadian Journal of Statistics © 2009 Statistical Society of Canada
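The identity the estimator exploits is easy to illustrate numerically. The numbers below are invented for illustration, not taken from the dementia study.

```python
# Back-of-the-envelope use of the epidemiologic identity
# (prevalence odds) = (incidence rate) x (mean duration),
# i.e. P / (1 - P) = lambda * mu, solved for lambda.
def incidence_rate(prevalence, mean_duration):
    """Solve P/(1-P) = lambda * mu for the incidence rate lambda."""
    return prevalence / ((1.0 - prevalence) * mean_duration)

# Invented example: 5% prevalence, mean disease duration of 4 years.
lam = incidence_rate(0.05, 4.0)
print(f"estimated incidence rate: {lam:.4f} per person-year")
```

In the paper, P and µ are replaced by their marginal maximum likelihood estimators, which yields the asymptotically efficient MLE of λ.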
105.
David Oakes, Lifetime Data Analysis, 2013, 19(4): 442–462
I review some key ideas and models in survival analysis with emphasis on modeling the effects of covariates on survival times. I focus on the proportional hazards model of Cox (J R Stat Soc B 34:187–220, 1972), its extensions and alternatives, including the accelerated life model. I briefly describe some models for competing risks data, multiple and repeated event-time data and multivariate survival data.
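The defining property of the Cox model is easy to demonstrate: under h(t | x) = h0(t) · exp(βx), the hazard ratio between two covariate values is constant in time, whatever the baseline hazard. The baseline below is an arbitrary illustrative choice.

```python
# Minimal sketch of the proportional hazards assumption in the Cox model.
import math

def hazard(t, x, beta, baseline=lambda t: 0.1 * t):
    """Cox-model hazard h(t|x) = h0(t) * exp(beta * x).

    The baseline hazard h0 is arbitrary here; the Cox partial
    likelihood estimates beta without specifying it.
    """
    return baseline(t) * math.exp(beta * x)

beta = 0.7
for t in (1.0, 2.0, 5.0):
    ratio = hazard(t, 1.0, beta) / hazard(t, 0.0, beta)
    print(f"t={t}: hazard ratio = {ratio:.4f}")   # always exp(0.7), for any t
```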
106.
In many applications in applied statistics, researchers reduce the complexity of a data set by combining a group of variables into a single measure using a factor analysis or an index number. We argue that such compression loses information if the data actually have high dimensionality. We advocate the use of a non-parametric estimator, commonly used in physics (the Takens estimator), to estimate the correlation dimension of the data prior to compression. The advantage of this approach over traditional linear data compression approaches is that the data do not have to be linearised. Applying our ideas to the United Nations Human Development Index, we find that the four variables used in its construction have dimension 3, so the index loses information.
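The Takens estimator itself is short: for all pairwise distances d below a cutoff r0, the maximum-likelihood estimate of the correlation dimension is D = −1 / mean(log(d / r0)). As a sanity check, the sketch below feeds it one-dimensional data, whose correlation dimension is 1; the cutoff and sample size are illustrative choices.

```python
# Sketch of the Takens maximum-likelihood estimator of correlation dimension.
import math
import random

def takens_dimension(points, r0, dist):
    """Takens estimator: D = -N / sum(log(d/r0)) over pairs with 0 < d < r0."""
    logs = []
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            d = dist(points[i], points[j])
            if 0.0 < d < r0:
                logs.append(math.log(d / r0))
    return -len(logs) / sum(logs)

random.seed(7)
# One-dimensional data: uniform draws on [0, 1]; true dimension is 1.
xs = [random.random() for _ in range(400)]
d_hat = takens_dimension(xs, r0=0.05, dist=lambda a, b: abs(a - b))
print(f"estimated correlation dimension: {d_hat:.2f}")   # close to 1
```

For an index such as the HDI, one would apply the same estimator to the standardized component variables in R^4 before deciding whether a one-dimensional summary is adequate.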
107.
At a data analysis exposition sponsored by the Section on Statistical Graphics of the ASA in 1988, 15 groups of statisticians analyzed the same data about salaries of major league baseball players. By examining what they did, what worked, and what failed, we can begin to learn about the relative strengths and weaknesses of different approaches to analyzing data. The data are rich in difficulties. They require reexpression, contain errors and outliers, and exhibit nonlinear relationships. They thus pose a realistic challenge to the variety of data analysis techniques used. The analysis groups chose a wide range of model-fitting methods, including regression, principal components, factor analysis, time series, and CART. We thus have an effective framework for comparing these approaches so that we can learn more about them. Our examination shows that approaches commonly identified with Exploratory Data Analysis are substantially more effective at revealing the underlying patterns in the data and at building parsimonious, understandable models that fit the data well. We also find that common data displays, when applied carefully, are often sufficient for even complex analyses such as this.
108.
109.
The empirical likelihood (EL) technique has been well addressed in both the theoretical and applied literature as a powerful nonparametric statistical method for testing and interval estimation. A nonparametric version of Wilks' theorem (Wilks, 1938) can usually provide an asymptotic evaluation of the Type I error of EL ratio-type tests. In this article, we examine the performance of this asymptotic result when the EL is based on finite samples from various distributions. In the context of Type I error control, we show that the classical EL procedure and Student's t-test have an asymptotically similar structure. Thus, we conclude that modifications of t-type tests can be adopted to improve the EL ratio test. We propose applying the Chen (1995) t-test modification to the EL ratio test. We show that the Chen approach amounts to a location change of the observed data, whereas the classical Bartlett method is known to be a scale correction of the data distribution. Finally, we modify the EL ratio test via both the Chen and Bartlett corrections. We support our argument with theoretical proofs as well as a Monte Carlo study. A real-data example illustrates the proposed approach in practice.
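The uncorrected statistic the article starts from can be sketched as follows: the classical EL ratio for a mean, where −2 log R is compared to a chi-square(1) critical value per Wilks' theorem. The Chen and Bartlett corrections discussed in the text are not implemented here, and the data are invented.

```python
# Classical empirical likelihood ratio statistic for a mean (sketch).
import math

def el_log_ratio(data, mu0, tol=1e-12):
    """Return -2 log R(mu0) for the empirical likelihood of the mean."""
    d = [x - mu0 for x in data]
    d_max, d_min = max(d), min(d)
    if d_max <= 0 or d_min >= 0:
        raise ValueError("mu0 must lie strictly inside the range of the data")
    # Solve sum(d_i / (1 + lam*d_i)) = 0 for the Lagrange multiplier by
    # bisection; the bracket keeps every weight 1 + lam*d_i positive.
    lo = -1.0 / d_max + 1e-10
    hi = -1.0 / d_min - 1e-10
    def g(lam):
        return sum(di / (1.0 + lam * di) for di in d)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:     # g is decreasing in lam
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * sum(math.log(1.0 + lam * di) for di in d)

data = [1.1, 0.4, 2.3, 1.8, 0.9, 1.5, 2.0, 0.7]   # invented sample
stat = el_log_ratio(data, mu0=1.0)
print(f"-2 log R = {stat:.3f}")   # compare to chi-square(1) value 3.84
```

The corrections studied in the article then adjust this statistic: Bartlett rescales it, while the Chen modification shifts the data.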
110.
While Markov chain Monte Carlo (MCMC) methods are frequently used for difficult calculations in a wide range of scientific disciplines, they suffer from a serious limitation: their samples are not independent and identically distributed. Consequently, estimates of expectations are biased if the initial value of the chain is not drawn from the target distribution. Regenerative simulation provides an elegant solution to this problem. In this article, we propose a simple regenerative MCMC algorithm to generate variates from any distribution.
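The problem the article addresses is easy to exhibit with a plain random-walk Metropolis chain: successive draws are correlated, and early draws are biased toward the (arbitrary) starting value, which is usually handled by an ad hoc burn-in. The regenerative construction proposed in the article is not reproduced here; this sketch only shows the baseline sampler and the initialization bias it removes.

```python
# Random-walk Metropolis chain targeting a standard normal, started far
# from the target to illustrate initialization bias and ad hoc burn-in.
import math
import random

random.seed(3)

def metropolis_normal(n, x0=5.0, step=1.0):
    """Random-walk Metropolis chain for pi = N(0, 1)."""
    x, chain = x0, []
    for _ in range(n):
        prop = x + random.gauss(0.0, step)
        # Accept with probability min(1, pi(prop)/pi(x)).
        if math.log(random.random()) < 0.5 * (x * x - prop * prop):
            x = prop
        chain.append(x)
    return chain

chain = metropolis_normal(20000, x0=5.0)
burn = chain[2000:]                      # crude fix: discard early draws
m = sum(burn) / len(burn)
v = sum((x - m) ** 2 for x in burn) / len(burn)
print(f"mean after burn-in: {m:.3f}, variance: {v:.3f}")   # near 0 and 1
```

Regenerative simulation replaces the arbitrary burn-in choice with identified regeneration times at which the chain probabilistically restarts, yielding i.i.d. tours.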