91.
Yasutaka Chiba, Communications in Statistics: Theory and Methods, 2013, 42(23): 4278-4288
Unmeasured confounding is a common problem in observational studies. This article presents simple formulae that bound the confounding risk ratio for the three standard target populations: the exposed, the unexposed, and the total group. The bounds are derived by treating the confounding risk ratio as a function of the prevalence of a covariate, and can be constructed using information about only the exposure–confounder or only the disease–confounder relationship. The formulae extend to the confounding odds ratio in case–control studies, and the confounding risk difference is also discussed. Their application is demonstrated with an example in which estimation may be biased by population stratification. The formulae help provide a realistic picture of the potential impact of bias due to confounding.
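To make the idea concrete, here is a minimal sketch of bounding a confounding risk ratio by treating it as a function of covariate prevalence. The functional form below is the standard expression for a single binary confounder with disease–confounder risk ratio g; it is an illustrative assumption, not the paper's exact formulae.

```python
# Bounds on the confounding risk ratio (CRR) for a single binary
# confounder, viewed as a function of the confounder's prevalence.
# Assumed model (standard for a binary confounder, not the paper's):
#   CRR = (1 + (g - 1) * p1) / (1 + (g - 1) * p0)
# where g  = confounder-disease risk ratio,
#       p1 = confounder prevalence among the exposed,
#       p0 = confounder prevalence among the unexposed.
import numpy as np

def crr(g, p1, p0):
    """Confounding risk ratio under the assumed binary-confounder model."""
    return (1.0 + (g - 1.0) * p1) / (1.0 + (g - 1.0) * p0)

def crr_bounds(g, n_grid=1001):
    """Bound the CRR using only the disease-confounder risk ratio g,
    letting both unknown prevalences range over [0, 1]."""
    p = np.linspace(0.0, 1.0, n_grid)
    values = crr(g, p[:, None], p[None, :])   # all (p1, p0) pairs
    return values.min(), values.max()

lo, hi = crr_bounds(g=2.5)
print(f"CRR bounds for g = 2.5: [{lo:.3f}, {hi:.3f}]")   # -> [0.400, 2.500]
```

The grid search simply reproduces the analytic extremes 1/g and g, attained at opposite prevalence corners; the paper's formulae sharpen such bounds using partial information about either relationship.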
92.
Muhammad Aslam, Ching-Ho Yen, Chia-Hao Chang, Chi-Hyuck Jun, Munir Ahmad, Mujahid Rasool, Communications in Statistics: Theory and Methods, 2013, 42(20): 3633-3647
In this article, a two-stage variables acceptance sampling plan is developed for the case in which the quality characteristic is evaluated through a process loss function. The plan parameters are determined using the two-point approach and tabulated for various quality levels. Two cases are discussed: when the process mean lies at the target value and when it does not. Extensive tables are provided for both cases, and the results are explained with examples. The proposed plan is compared with the existing single-stage variables acceptance sampling plan based on the process loss function to show its advantage.
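The generic operating logic of such a plan can be sketched as follows. The loss statistic is the usual quadratic (Taguchi-type) estimate of relative expected loss; the plan parameters n1, n2, ka, and kr are hypothetical placeholders, since the paper determines them via the two-point (producer/consumer risk) approach.

```python
# Sketch of a two-stage variables plan where lot quality is judged
# through a quadratic process loss.  Plan parameters are hypothetical.
import numpy as np

def loss_statistic(x, target, d):
    """Estimated relative expected loss E[((X - T)/d)^2]."""
    x = np.asarray(x, dtype=float)
    return (x.var(ddof=1) + (x.mean() - target) ** 2) / d ** 2

def two_stage_decision(stage1, draw_stage2, target, d, ka, kr):
    """ka < kr: accept below ka, reject above kr; otherwise draw the
    second sample and compare the combined statistic against kr."""
    l1 = loss_statistic(stage1, target, d)
    if l1 <= ka:
        return "accept"
    if l1 > kr:
        return "reject"
    combined = np.concatenate([stage1, draw_stage2()])
    return "accept" if loss_statistic(combined, target, d) <= kr else "reject"

rng = np.random.default_rng(1)
s1 = rng.normal(10.02, 0.05, size=20)            # stage-1 measurements
s2 = lambda: rng.normal(10.02, 0.05, size=20)    # stage-2 sample on demand
print(two_stage_decision(s1, s2, target=10.0, d=0.15, ka=0.20, kr=0.40))
```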
93.
Fan Yang, Communications in Statistics: Theory and Methods, 2013, 42(3): 520-532
The tail distortion risk measure at level p was first introduced in Zhu and Li (2012), where the parameter p ∈ (0, 1) indicates the confidence level. They established first-order asymptotics for this risk measure, as p ↑ 1, for the Fréchet case. In this article, we extend their work by establishing both first-order and second-order asymptotics for the Fréchet, Weibull, and Gumbel cases. Numerical studies are also carried out to examine the accuracy of both asymptotics.
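A first-order relation of this type, rho_p(X) ~ c · VaR_p(X) as p ↑ 1, can be checked numerically. The setup below is illustrative, not the paper's derivation: X is exact Pareto(alpha) with a proportional-hazard distortion g(u) = u**kappa, and the tail distortion risk measure is the distorted expectation of X given X > VaR_p.

```python
# Numerical check of rho_p(X) ~ c * VaR_p(X) for a Frechet-type
# (regularly varying) risk.  Assumed setup: X ~ Pareto(alpha) with
# survival x**(-alpha) for x >= 1, distortion g(u) = u**kappa.
import numpy as np
from scipy.integrate import quad

alpha, kappa = 3.0, 0.5           # need alpha * kappa > 1 for a finite limit

def var_p(p):                     # VaR_p = F^{-1}(p) for Pareto(alpha)
    return (1.0 - p) ** (-1.0 / alpha)

def tail_distortion(p):
    """Distorted expectation of X given X > VaR_p, computed as
    VaR_p + integral of g(conditional survival) above VaR_p."""
    v = var_p(p)
    integrand = lambda t: ((v / t) ** alpha) ** kappa
    tail, _ = quad(integrand, v, np.inf)
    return v + tail

limit = alpha * kappa / (alpha * kappa - 1.0)   # analytic first-order constant
for p in [0.9, 0.99, 0.999, 0.9999]:
    print(f"p={p}: rho/VaR = {tail_distortion(p)/var_p(p):.4f} (limit {limit:.4f})")
```

For exact Pareto the ratio equals the limiting constant at every p; second-order terms appear only for distributions with slowly varying corrections to the power-law tail, which is exactly what the article's second-order asymptotics quantify.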
94.
We introduce a new parsimonious bimodal distribution, referred to as the bimodal skew-symmetric Normal (BSSN) distribution, which is potentially effective in capturing bimodality, excess kurtosis, and skewness. Explicit expressions for the moment-generating function, mean, variance, skewness, and excess kurtosis are derived. The shape properties of the proposed distribution are investigated with regard to skewness, kurtosis, and bimodality. Maximum likelihood estimation is considered, and an expression for the observed information matrix is provided. Illustrative examples using medical and financial data, as well as simulated data from a mixture of normal distributions, are worked through.
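As a rough illustration of maximum likelihood fitting for a parsimonious bimodal skew family, the sketch below uses a quadratic-times-Gaussian density, f(x) ∝ ((x − beta)² + gamma) · exp(−delta · (x − mu)²). This is an illustrative stand-in, not necessarily the paper's exact BSSN parameterization, and the normalizing constant is computed numerically rather than in closed form.

```python
# MLE sketch for an assumed quadratic-times-Gaussian bimodal skew
# density (illustrative stand-in for the BSSN family).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def log_pdf(x, mu, beta, gamma, delta):
    kernel = lambda t: ((t - beta) ** 2 + gamma) * np.exp(-delta * (t - mu) ** 2)
    const, _ = quad(kernel, -np.inf, np.inf)      # numerical normalizing constant
    return np.log(kernel(x)) - np.log(const)

def neg_loglik(theta, x):
    mu, beta, log_gamma, log_delta = theta        # logs keep gamma, delta > 0
    return -np.sum(log_pdf(x, mu, beta, np.exp(log_gamma), np.exp(log_delta)))

# Simulated data from a two-component normal mixture, echoing the
# article's illustration.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-1.5, 0.7, 400), rng.normal(1.8, 0.9, 600)])

fit = minimize(neg_loglik, x0=[0.0, 0.0, 0.0, 0.0], args=(x,),
               method="Nelder-Mead", options={"maxiter": 2000})
mu, beta = fit.x[:2]
gamma, delta = np.exp(fit.x[2:])
print(f"MLE: mu={mu:.3f}, beta={beta:.3f}, gamma={gamma:.3f}, delta={delta:.3f}")
```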
95.
In this paper, we consider the tail behavior of a two-dimensional renewal risk model with two dependent classes of insurance business, in which the claim sizes are governed by a common renewal counting process and their inter-arrival times are dependent and identically distributed. For the case in which the claim size distribution belongs to the intersection of the long-tailed class and the dominated variation class, we obtain an asymptotic formula that holds uniformly over an infinite time interval. Moreover, we point out that the formula still holds uniformly over an infinite time interval for widely dependent random variables (r.v.s) under some conditions.
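Asymptotic formulae of this kind can be sanity-checked against simulation. The sketch below estimates a finite-time ruin probability for a bivariate risk model in which the two lines share one claim-arrival process and heavy-tailed (Pareto) claims are coupled through a common shock. All parameter values are hypothetical, the arrival process is simplified to Poisson (a special renewal process), and "ruin" is taken as both reserves going negative, one of several ruin notions for bivariate models.

```python
# Monte Carlo sketch of a two-dimensional risk model with a common
# claim-arrival process and dependent heavy-tailed claims.
import numpy as np

rng = np.random.default_rng(42)

def simulate_ruin(x1, x2, c1, c2, alpha, horizon, n_paths=20_000):
    """Estimate P(both U1 and U2 drop below 0 by `horizon`)."""
    ruined = 0
    for _ in range(n_paths):
        t, s1, s2 = 0.0, 0.0, 0.0          # time, cumulative claims per line
        hit1 = hit2 = False
        while True:
            t += rng.exponential(1.0)      # common arrival process
            if t > horizon:
                break
            # Dependent, identically distributed claims: a shared
            # Pareto shock couples the two lines of business.
            v = rng.pareto(alpha) + 1.0
            s1 += v * rng.uniform(0.5, 1.5)
            s2 += v * rng.uniform(0.5, 1.5)
            hit1 = hit1 or (x1 + c1 * t - s1 < 0)
            hit2 = hit2 or (x2 + c2 * t - s2 < 0)
            if hit1 and hit2:
                ruined += 1
                break
    return ruined / n_paths

print(simulate_ruin(x1=10.0, x2=10.0, c1=2.0, c2=2.0, alpha=2.0, horizon=50.0))
```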
96.
Traditional credit risk assessment models do not consider the time factor: they address only whether a customer will default, not when the default will occur, so their results cannot support profit-maximizing decisions. In fact, even when a customer defaults, the financial institution can still profit under some conditions. Much recent research therefore applies the Cox proportional hazards model to credit scoring, predicting the time at which a customer is most likely to default. However, fully exploiting the dynamic capability of the Cox proportional hazards model requires time-varying macroeconomic variables, which involve more advanced data collection. Since short-term defaults are the ones that cause great losses for a financial institution, a loan manager approving applications is less interested in predicting exactly when a loan will default than in identifying applications that may default within a short period of time. This paper proposes a decision tree-based short-term default credit risk assessment model. The goal is to use the decision tree to filter short-term defaults and produce a highly accurate model that can distinguish defaulting loans. The model integrates bootstrap aggregating (bagging) with the synthetic minority over-sampling technique (SMOTE) to improve the stability of the decision tree and its performance on unbalanced data. Finally, a real case of small and medium enterprise loan data drawn from a local financial institution in Taiwan is presented to illustrate the proposed approach. Comparing the results with logistic regression and Cox proportional hazards models shows that the recall and precision of the proposed model are clearly superior.
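The SMOTE-plus-bagging pipeline the abstract describes can be sketched directly with scikit-learn and imbalanced-learn. The loan data below is synthetic; in the paper's setting the inputs would be the institution's application features with a short-term-default label.

```python
# Sketch of the pipeline: SMOTE to rebalance the minority (short-term
# default) class, then bootstrap aggregating over decision trees.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Imbalanced stand-in for SME loan data: about 5% short-term defaults.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training data so the test set stays untouched.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

model = BaggingClassifier(DecisionTreeClassifier(max_depth=8),
                          n_estimators=100, random_state=0)
model.fit(X_bal, y_bal)
print(classification_report(y_te, model.predict(X_te), digits=3))
```

Resampling inside the training split only is the standard precaution: applying SMOTE before the split would leak synthetic neighbors of test points into training and inflate the reported recall and precision.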
97.
In clinical trials with binary endpoints, the required sample size depends not only on the specified type I error rate, the desired power, and the treatment effect, but also on the overall event rate, which is usually uncertain. The internal pilot study design has been proposed to overcome this difficulty: nuisance parameters required for sample size calculation are re-estimated during the ongoing trial, and the sample size is recalculated accordingly. We performed extensive simulation studies to investigate the characteristics of the internal pilot study design for two-group superiority trials in which the treatment effect is captured by the relative risk. As the performance of the sample size recalculation procedure depends crucially on the accuracy of the applied sample size formula, we first explored the precision of three approximate sample size formulae proposed in the literature for this situation. It turned out that the unequal variance asymptotic normal formula outperforms the other two, especially in the case of unbalanced sample size allocation. Using this formula for sample size recalculation in the internal pilot study design ensures that the desired power is achieved even if the overall rate is mis-specified in the planning phase. The maximum inflation of the type I error rate observed for the internal pilot study design is small and lies below the maximum excess that occurred for the fixed sample size design.
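The recalculation step can be sketched as follows. The formula used here is the generic unpooled ("unequal variance") asymptotic normal sample size for two proportions with equal allocation; the paper compares three specific approximations, which may differ in detail, and the planning values below are hypothetical.

```python
# Internal pilot study sketch: plan with a guessed overall event rate,
# re-estimate it blinded at the interim, and recalculate the sample size.
import numpy as np
from scipy.stats import norm

alpha, power, rr = 0.05, 0.80, 0.67   # two-sided level, target power, relative risk

def n_per_group(p_bar):
    """Unpooled normal-approximation sample size, equal allocation,
    parameterized by the overall event rate p_bar and the relative risk."""
    p2 = 2.0 * p_bar / (1.0 + rr)     # control rate: (p1 + p2)/2 = p_bar, p1 = rr*p2
    p1 = rr * p2
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(z**2 * (p1*(1-p1) + p2*(1-p2)) / (p1 - p2)**2))

rng = np.random.default_rng(7)
p_bar_true, p_bar_guess = 0.30, 0.20  # planning guess is too low

n_init = n_per_group(p_bar_guess)
n_pilot = n_init // 2                 # internal pilot: half the initial size
p2_true = 2.0 * p_bar_true / (1.0 + rr)
pilot = rng.binomial(1, [rr * p2_true, p2_true], size=(n_pilot, 2))

# Blinded re-estimation of the overall rate, then recalculation.
p_bar_hat = pilot.mean()
n_final = max(n_per_group(p_bar_hat), n_pilot)
print(f"initial n/group: {n_init}, recalculated n/group: {n_final}")
```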
98.
In this paper, a test statistic that is a modification of the W statistic is proposed for testing the goodness of fit of the two-parameter extreme value (smallest element) distribution. The test statistic is obtained as the ratio of two linear estimates of the scale parameter. It is shown that the suggested statistic is computationally simple and has good power properties. Percentage points of the statistic are obtained by Monte Carlo experiments. An example is given to illustrate the test procedure.
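The Monte Carlo recipe for such percentage points is straightforward, because a ratio of two scale estimates is location-scale invariant under the null. The two estimators below (a linear L-moment-type scale estimate and the sample standard deviation) are illustrative stand-ins for the paper's specific linear estimates.

```python
# Monte Carlo percentage points for a W-type ratio statistic under the
# two-parameter smallest-extreme-value (Gumbel-minimum) null.
import numpy as np

rng = np.random.default_rng(123)

def w_type_statistic(x):
    x = np.sort(x)
    n = len(x)
    # Linear scale estimate: the second sample L-moment.
    i = np.arange(1, n + 1)
    l2 = 2.0 * np.sum((i - 1) * x) / (n * (n - 1)) - x.mean()
    # Ratio of the linear estimate to a moment-based estimate.
    return l2 / x.std(ddof=1)

def percentage_points(n, n_rep=50_000,
                      probs=(0.01, 0.05, 0.10, 0.90, 0.95, 0.99)):
    """Null quantiles: simulate Gumbel-minimum samples (numpy's gumbel
    is the maximum form, so negate); location/scale drop out."""
    stats = np.array([w_type_statistic(-rng.gumbel(size=n))
                      for _ in range(n_rep)])
    return dict(zip(probs, np.quantile(stats, probs).round(4)))

print(percentage_points(n=20))
```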
99.
In this article, the problem of the optimal selection and allocation of time points in repeated measures experiments is considered. D-optimal designs for linear regression models with a random intercept and first-order autoregressive serial correlations are computed numerically and compared with designs having equally spaced time points. When the order of the polynomial is known and the serial correlations are not too small, the comparison shows that, for any fixed number of repeated measures, a design with equally spaced time points is almost as efficient as the D-optimal design. When there is no prior knowledge about the order of the underlying polynomial, however, the best choice in terms of efficiency is a D-optimal design for the highest possible relevant order of the polynomial; a design with equally spaced time points is the second-best choice.
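This comparison can be reproduced numerically: build the information matrix M = X'V⁻¹X for a polynomial regression with random-intercept-plus-AR(1) covariance, then compare log det(M) for equally spaced time points against a numerically optimized design. The variance components below are hypothetical, and a tiny nugget is added to V for numerical stability.

```python
# D-optimality comparison for repeated measures on [0, 1] with a
# random intercept and AR(1) serial correlation (hypothetical values).
import numpy as np
from scipy.optimize import minimize

degree, n_points = 2, 6
sigma2_b, rho = 0.5, 0.6          # random-intercept variance, AR(1) correlation

def log_det_info(times):
    t = np.sort(np.clip(times, 0.0, 1.0))
    X = np.vander(t, degree + 1, increasing=True)           # 1, t, t^2
    V = (sigma2_b + rho ** np.abs(t[:, None] - t[None, :])  # intercept + AR(1)
         + 1e-9 * np.eye(n_points))                         # numerical nugget
    M = X.T @ np.linalg.solve(V, X)
    return np.linalg.slogdet(M)[1]

equal = np.linspace(0.0, 1.0, n_points)
res = minimize(lambda t: -log_det_info(t), x0=equal, method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6})

print(f"equally spaced : log det M = {log_det_info(equal):.4f}")
print(f"D-optimal (num): log det M = {-res.fun:.4f}")
print("D-efficiency of equal spacing:",
      round(np.exp((log_det_info(equal) + res.fun) / (degree + 1)), 4))
```

With moderate-to-large rho the D-efficiency of the equally spaced design stays close to 1, matching the article's conclusion for known polynomial order.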
100.
James Albert, Communications in Statistics: Theory and Methods, 2013, 42(16): 1587-1611
In the simultaneous estimation of multinomial proportions, two estimators are developed which incorporate prior means and a prior parameter which reflects the accuracy of the prior means. These estimators possess substantially smaller risk than the standard estimator in a region of the parameter space and are much more robust than the conjugate Bayes estimator with respect to parameter values far from the prior mean. When vague prior information is available, these estimators and confidence regions derived from them appear to be attractive alternatives to the procedures based on the standard estimator.
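The basic mechanism, shrinking the sample proportions toward prior means with a precision parameter, can be illustrated with the standard Dirichlet-posterior-mean form. The paper's two estimators are refinements with better robustness far from the prior means; this sketch shows only the underlying compromise.

```python
# Shrinkage estimator for multinomial proportions: pull the MLE
# counts/n toward prior means pi0 with strength K, where K acts as a
# prior "sample size" reflecting confidence in pi0.
import numpy as np

def shrink_multinomial(counts, pi0, K):
    """(counts + K * pi0) / (n + K)."""
    counts = np.asarray(counts, dtype=float)
    return (counts + K * np.asarray(pi0)) / (counts.sum() + K)

counts = np.array([12, 3, 1, 4])          # observed cell counts, n = 20
pi0 = np.array([0.4, 0.3, 0.2, 0.1])      # prior means for the cells
for K in (0, 5, 20):
    print(K, shrink_multinomial(counts, pi0, K).round(3))
```

Setting K = 0 recovers the maximum likelihood estimator, while large K pulls the estimate toward pi0.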