871.
Most studies of quality improvement deal with ordered categorical data from industrial experiments. Accounting for the ordering of such data plays an important role in effectively determining the optimal factor-level combination. This paper utilizes correspondence analysis to develop a procedure, based on Taguchi's statistic, for improving the ordered categorical response in a multifactor system. The proposed procedure is attractive because it relies on a simple and popular statistical tool for graphically identifying the truly important factors and determining the levels that improve process quality. A case study on optimizing the polysilicon deposition process in a very large-scale integrated circuit is provided to demonstrate the effectiveness of the procedure.
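The correspondence-analysis ingredient of such a procedure can be sketched on a hypothetical level-by-category count table (the counts below are invented for illustration, and the Taguchi-statistic step of the paper is not reproduced):

```python
import numpy as np

# Hypothetical counts: rows are factor-level combinations, columns are
# ordered quality categories (worst -> best); values are illustrative only.
N = np.array([[20, 30, 50],
              [10, 25, 65],
              [40, 35, 25],
              [15, 20, 65]], dtype=float)

# Correspondence analysis via SVD of the standardized residual matrix.
P = N / N.sum()                      # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)  # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, d, Vt = np.linalg.svd(S, full_matrices=False)

# Principal row coordinates: plotting these shows graphically which
# factor-level combinations shift mass toward the better categories.
row_coords = (U * d) / np.sqrt(r)[:, None]

# Sanity check: total inertia times the grand total equals Pearson's chi-square.
print(row_coords[:, 0])
print((d ** 2).sum() * N.sum())
```

The SVD-based decomposition is standard simple correspondence analysis; the graphical reading of the row coordinates is what makes the important factors visually identifiable.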
872.
J. Fredrik Lindström 《Journal of applied statistics》2009,36(12):1369-1384
When VAR models are used to predict future outcomes, the forecast error can be substantial. Through imposition of restrictions on the off-diagonal elements of the parameter matrix, however, the information in the process may be condensed to the marginal processes. In particular, if the cross-autocorrelations in the system are small and only a small sample is available, then such a restriction may reduce the forecast mean squared error considerably.
In this paper, we propose three different techniques for deciding whether to use the restricted or the unrestricted model, i.e. only the marginal AR(1) models or the full VAR(1) model. In a Monte Carlo simulation study, the three proposed tests behave quite differently depending on the parameter setting. One of them stands out as the preferred test, however, and is shown to outperform the alternatives for a wide range of parameter settings.
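The restricted-versus-unrestricted trade-off can be illustrated with a small Monte Carlo sketch (the coefficient matrix, sample size, and replication count are illustrative assumptions, not the paper's settings): with small cross-autocorrelations and a short sample, the marginal AR(1) fits tend to forecast better than the full VAR(1).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bivariate VAR(1): y_t = A @ y_{t-1} + e_t, with small
# off-diagonal (cross-autocorrelation) entries in A.
A = np.array([[0.5, 0.05],
              [0.05, 0.5]])

def simulate_var1(A, n, rng):
    """Simulate n observations from a zero-mean VAR(1) with N(0, I) errors."""
    y = np.zeros((n + 1, 2))
    for t in range(1, n + 1):
        y[t] = A @ y[t - 1] + rng.standard_normal(2)
    return y[1:]

def ols_var1(y):
    """Unrestricted model: full 2x2 coefficient matrix by least squares."""
    X, Y = y[:-1], y[1:]
    return np.linalg.lstsq(X, Y, rcond=None)[0].T

def ols_marginal_ar1(y):
    """Restricted model: diagonal matrix of univariate AR(1) coefficients."""
    X, Y = y[:-1], y[1:]
    phi = (X * Y).sum(axis=0) / (X ** 2).sum(axis=0)
    return np.diag(phi)

# Compare one-step-ahead estimation error (forecast minus the true
# conditional mean) over many short samples of length 30.
mse_full, mse_marg = 0.0, 0.0
reps = 2000
for _ in range(reps):
    y = simulate_var1(A, 30, rng)
    train, last = y, y[-1]
    target = A @ last                 # true conditional mean of y_{31}
    mse_full += ((ols_var1(train) @ last - target) ** 2).sum()
    mse_marg += ((ols_marginal_ar1(train) @ last - target) ** 2).sum()

print(mse_full / reps, mse_marg / reps)
```

With these settings the restricted fit incurs a small omitted-coefficient bias but saves enough estimation variance to come out ahead, which is the effect the paper's tests are designed to detect.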
873.
The purpose of this article is to compare the efficiencies of several cluster randomized designs using the method of quantile dispersion graphs (QDGs). A cluster randomized design is one in which subjects are randomized at the group level but analyzed at the individual level. Prior knowledge of the correlation between subjects within the same cluster is normally necessary to design such trials; using the QDG approach, however, we are able to compare several cluster randomized designs without requiring any information on the intracluster correlation. For a given design, several quantiles of the power function, which is directly related to the effect size, are obtained for several effect sizes. These quantiles depend on the intracluster correlation present in the model. The dispersion of the quantiles over the space of the unknown intracluster correlation is determined and then depicted by the QDGs. Two applications of the proposed methodology are presented.
874.
Agustín Hernández Bastida José María Pérez Sánchez 《Journal of applied statistics》2009,36(8):853-869
The distribution of the aggregate claims in one year plays an important role in actuarial statistics for computing, for example, insurance premiums when both the number and the size of the claims must be incorporated into the model. When the number of claims follows a Poisson distribution, the aggregate distribution is called the compound Poisson distribution. In this article we assume that the claim size follows an exponential distribution, and we then study this model extensively by assuming a bidimensional prior distribution, with gamma marginals, for the parameters of the Poisson and exponential distributions. This study leads to expressions for net premiums and for marginal and posterior distributions in terms of well-known special functions used in statistics. A Bayesian robustness study of this model is then carried out. Bayesian robustness for bidimensional models was treated in depth in the 1990s, producing numerous results, but few applications dealing with this problem can be found in the literature.
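The compound Poisson structure described above can be sketched numerically (the Poisson and exponential parameters are illustrative assumptions, and the sketch is a plain frequentist simulation, not the article's Bayesian analysis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameter values: claim count N ~ Poisson(lam),
# claim sizes X_i ~ Exponential(mean = mu), all independent.
lam, mu = 4.0, 250.0

# Under the compound Poisson model the net premium (expected aggregate
# claims in one year) factorises as E[S] = E[N] * E[X] = lam * mu.
net_premium = lam * mu

# Monte Carlo check: simulate aggregate claims S = X_1 + ... + X_N.
n_years = 100_000
counts = rng.poisson(lam, size=n_years)
totals = np.array([rng.exponential(mu, size=n).sum() for n in counts])

print(net_premium, totals.mean())
```

The simulated mean should sit close to the analytic net premium of 1000; in the article this expectation is further averaged over the bidimensional gamma-marginal prior on (lam, 1/mu).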
875.
An approach to the analysis of time-dependent ordinal quality score data from robust design experiments is developed and applied to an experiment from commercial horticultural research, using concepts of product robustness and longevity that are familiar to analysts in engineering research. A two-stage analysis is used to develop models describing the effects of a number of experimental treatments on the rate of post-sales product quality decline. The first stage uses a polynomial function on a transformed scale to approximate the quality decline for an individual experimental unit using derived coefficients and the second stage uses a joint mean and dispersion model to investigate the effects of the experimental treatments on these derived coefficients. The approach, developed specifically for an application in horticulture, is exemplified with data from a trial testing ornamental plants that are subjected to a range of treatments during production and home-life. The results of the analysis show how a number of control and noise factors affect the rate of post-production quality decline. Although the model is used to analyse quality data from a trial on ornamental plants, the approach developed is expected to be more generally applicable to a wide range of other complex production systems.
876.
For capture–recapture models when covariates are subject to measurement errors and missing data, a set of estimating equations is constructed to estimate population size and relevant parameters. These estimating equations can be solved by an algorithm similar to the EM algorithm. The proposed method is also applicable to the situation when covariates with no measurement errors have missing data. Simulation studies are used to assess the performance of the proposed estimator. The estimator is also applied to a capture–recapture experiment on the bird species Prinia flaviventris in Hong Kong. The Canadian Journal of Statistics 37: 645–658; 2009 © 2009 Statistical Society of Canada
877.
Yaling Yin Christine E. Soteros Miķelis G. Bickis 《Journal of statistical planning and inference》2009
Traditional multiple hypothesis testing procedures fix an error rate and determine the corresponding rejection region. In 2002, Storey proposed a fixed rejection region procedure and showed numerically that it can gain more power than the fixed error rate procedure of Benjamini and Hochberg while controlling the same false discovery rate (FDR). In this paper it is proved that when the number of alternatives is small compared to the total number of hypotheses, Storey's method can be less powerful than that of Benjamini and Hochberg. Moreover, the two procedures are compared by setting them to produce the same FDR. The difference in power between Storey's procedure and that of Benjamini and Hochberg is near zero when the distance between the null and alternative distributions is large, but Benjamini and Hochberg's procedure becomes more powerful as the distance decreases. It is shown that modifying the Benjamini and Hochberg procedure to incorporate an estimate of the proportion of true null hypotheses, as proposed by Black, gives a procedure with superior power.
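The Benjamini–Hochberg step-up procedure compared above can be sketched directly (the p-values below are made up for illustration):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: reject the k smallest p-values,
    where k is the largest i with p_(i) <= i * q / m."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresholds = (np.arange(1, m + 1) / m) * q
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# Toy example: a few small p-values among mostly-null hypotheses.
pvals = [0.001, 0.008, 0.039, 0.041, 0.27, 0.34, 0.52, 0.61, 0.74, 0.98]
print(benjamini_hochberg(pvals, q=0.05))
```

Black's modification mentioned in the abstract amounts to running the same step-up rule at level q divided by an estimate of the proportion of true nulls, which enlarges the rejection region when many hypotheses are non-null.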
878.
In this article, we introduce three new distribution-free Shewhart-type control charts that exploit run statistics and Wilcoxon-type rank-sum statistics to detect possible shifts in a monitored process. Exact formulae for the alarm rate, the run length distribution, and the average run length (ARL) are all derived. A key advantage of these charts is that, owing to their nonparametric nature, the false alarm rate (FAR) and the in-control run length distribution are the same for all continuous process distributions. Tables are provided for implementing the charts at some typical FAR values. Furthermore, a numerical study reveals that the new charts are quite flexible and efficient in detecting shifts to Lehmann-type out-of-control situations.
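The rank-sum ingredient of such a chart can be sketched as follows (the reference-sample size, subgroup size, and normal-approximation three-sigma limits are illustrative assumptions; the paper derives exact run-length distributions instead):

```python
import numpy as np

rng = np.random.default_rng(2)

def rank_sum(reference, subgroup):
    """Wilcoxon rank-sum statistic: the sum of the ranks of the subgroup
    observations within the combined (reference + subgroup) sample."""
    combined = np.concatenate([reference, subgroup])
    ranks = combined.argsort().argsort() + 1  # ranks 1..m+n (ties a.s. absent)
    return ranks[len(reference):].sum()

# Phase I reference sample, assumed in control; any continuous process
# distribution gives the same in-control behaviour, which is the key
# advantage of the nonparametric charts.
reference = rng.standard_normal(50)
m, n = len(reference), 5              # reference and subgroup sizes

# Illustrative three-sigma limits from the null mean and variance of W.
mean_w = n * (m + n + 1) / 2
sd_w = np.sqrt(m * n * (m + n + 1) / 12)
lcl, ucl = mean_w - 3 * sd_w, mean_w + 3 * sd_w

# A strongly shifted subgroup should push W beyond the upper limit.
shifted = rng.standard_normal(n) + 3.0
w = rank_sum(reference, shifted)
print(w, (lcl, ucl), w > ucl)
```

Because the chart statistic depends on the data only through ranks, the in-control distribution of W, and hence the FAR, is free of the underlying process distribution.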
879.
John Ermisch Diego Gambetta Heather Laurie Thomas Siedler S. C. Noah Uhrig 《Journal of the Royal Statistical Society. Series A, (Statistics in Society)》2009,172(4):749-769
Summary. We measure trust and trustworthiness in British society with a newly designed experiment using real monetary rewards and a sample of the British population. The study also asks the typical survey question that aims to measure trust, showing that it does not predict 'trust' as measured in the experiment. Overall, about 40% of people were willing to trust a stranger in our experiment, and their trust was rewarded half of the time. Analysis of variation in trust behaviour in our survey suggests that trusting is more likely if people are older, if their financial situation is either 'comfortable' or 'difficult' compared with 'doing alright' or 'just getting by', if they are homeowners, or if they are divorced, separated or never married compared with those who are married or cohabiting. Trustworthiness is also more likely among subjects who are divorced or separated relative to those who are married or cohabiting, and less likely among subjects who perceive their financial situation as 'just getting by' or 'difficult'. We also analyse the effect of attitudes towards risk on trust.
880.
Using relative utility curves to evaluate risk prediction
Stuart G. Baker Nancy R. Cook Andrew Vickers Barnett S. Kramer 《Journal of the Royal Statistical Society. Series A, (Statistics in Society)》2009,172(4):729-748
Summary. Because many medical decisions are based on risk prediction models constructed from medical history and test results, the evaluation of these prediction models is important. This paper makes five contributions to this evaluation: (i) the relative utility curve, which gauges the potential for better prediction in terms of utilities, without the need for a reference level for one utility, while providing a sensitivity analysis for misspecification of utilities; (ii) the relevant region, which is the set of values of prediction performance that are consistent with the recommended treatment status in the absence of prediction; (iii) the test threshold, which is the minimum number of tests that would be traded for a true positive prediction in order for the expected utility to be non-negative; (iv) the evaluation of two-stage predictions that reduce test costs; and (v) connections between various measures of prediction performance. An application involving the risk of cardiovascular disease is discussed.