Full-text access type
Paid full text | 26318 articles |
Free | 759 articles |
Free (domestic) | 2 articles |
Subject category
Management | 3506 articles |
Ethnology | 138 articles |
Demography | 2340 articles |
Collected series | 105 articles |
Popular education | 1 article |
Theory and methodology | 2348 articles |
General | 301 articles |
Sociology | 12964 articles |
Statistics | 5376 articles |
Publication year
2023 | 163 articles |
2021 | 155 articles |
2020 | 434 articles |
2019 | 648 articles |
2018 | 732 articles |
2017 | 999 articles |
2016 | 740 articles |
2015 | 531 articles |
2014 | 715 articles |
2013 | 4734 articles |
2012 | 919 articles |
2011 | 848 articles |
2010 | 656 articles |
2009 | 541 articles |
2008 | 726 articles |
2007 | 671 articles |
2006 | 678 articles |
2005 | 539 articles |
2004 | 539 articles |
2003 | 454 articles |
2002 | 494 articles |
2001 | 617 articles |
2000 | 550 articles |
1999 | 529 articles |
1998 | 419 articles |
1997 | 376 articles |
1996 | 386 articles |
1995 | 385 articles |
1994 | 323 articles |
1993 | 368 articles |
1992 | 415 articles |
1991 | 397 articles |
1990 | 399 articles |
1989 | 342 articles |
1988 | 338 articles |
1987 | 301 articles |
1986 | 316 articles |
1985 | 324 articles |
1984 | 321 articles |
1983 | 280 articles |
1982 | 244 articles |
1981 | 185 articles |
1980 | 239 articles |
1979 | 266 articles |
1978 | 200 articles |
1977 | 184 articles |
1976 | 154 articles |
1975 | 175 articles |
1974 | 154 articles |
1972 | 121 articles |
Sort order: 10,000 results found (search time: 15 ms)
61.
Jonathan H. Wright 《Econometric Reviews》2002,21(4):397-417
Many recent papers have used semiparametric methods, especially the log-periodogram regression, to detect and estimate long memory in the volatility of asset returns. In these papers, the volatility is proxied by measures such as squared, log-squared, and absolute returns. While the evidence for the existence of long memory is strong using any of these measures, the actual long memory parameter estimates can be sensitive to which measure is used. In Monte Carlo simulations, I find that if the data are conditionally leptokurtic, the log-periodogram regression estimator using squared returns has a large downward bias, which is avoided by using other volatility measures. In United States stock return data, I find that squared returns give much lower estimates of the long memory parameter than the alternative volatility measures, which is consistent with the simulation results. I conclude that researchers should avoid using squared returns in the semiparametric estimation of long memory volatility dependencies.
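The log-periodogram (GPH) regression named in the abstract can be sketched in a few lines: regress the log periodogram at low Fourier frequencies on -2 log(2 sin(λ/2)) and read the long-memory parameter d off the slope. A minimal illustration, not Wright's exact setup: the bandwidth choice m = n^0.5 and the use of squared returns as the volatility proxy are assumptions for the sketch.

```python
import numpy as np

def gph_estimate(x, power=0.5):
    """GPH log-periodogram estimate of the long-memory parameter d.

    Regresses log I(lambda_j) on -2*log(2*sin(lambda_j/2)) over the first
    m = n**power Fourier frequencies; the OLS slope estimates d.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    m = int(n ** power)                      # bandwidth: number of low frequencies used
    j = np.arange(1, m + 1)
    lam = 2.0 * np.pi * j / n                # Fourier frequencies lambda_j
    # periodogram ordinates at lambda_1 .. lambda_m
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2.0 * np.pi * n)
    X = -2.0 * np.log(2.0 * np.sin(lam / 2.0))
    design = np.column_stack([np.ones(m), X])
    coef, *_ = np.linalg.lstsq(design, np.log(I), rcond=None)
    return coef[1]                           # slope = estimate of d

# volatility proxy: squared returns (one of the measures compared in the paper)
rng = np.random.default_rng(0)
returns = rng.standard_normal(4096)          # i.i.d. returns, so true d = 0
d_hat = gph_estimate(returns ** 2)
```

For i.i.d. returns the estimate should scatter around d = 0; the abstract's point is that for conditionally leptokurtic data this estimator, applied to squared returns, is biased downward.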
62.
Craig H. Mallinckrodt Christopher J. Kaiser John G. Watkin Michael J. Detke Geert Molenberghs Raymond J. Carroll 《Pharmaceutical statistics》2004,3(3):171-186
The last observation carried forward (LOCF) approach is commonly utilized to handle missing values in the primary analysis of clinical trials. However, recent evidence suggests that likelihood‐based analyses developed under the missing at random (MAR) framework are sensible alternatives. The objective of this study was to assess the Type I error rates from a likelihood‐based MAR approach – mixed‐model repeated measures (MMRM) – compared with LOCF when estimating treatment contrasts for mean change from baseline to endpoint (Δ). Data emulating neuropsychiatric clinical trials were simulated in a 4 × 4 factorial arrangement of scenarios, using four patterns of mean changes over time and four strategies for deleting data to generate subject dropout via an MAR mechanism. In data with no dropout, estimates of Δ and SEΔ from MMRM and LOCF were identical. In data with dropout, the Type I error rates (averaged across all scenarios) for MMRM and LOCF were 5.49% and 16.76%, respectively. In 11 of the 16 scenarios, the Type I error rate from MMRM was at least 1.00% closer to the expected rate of 5.00% than the corresponding rate from LOCF. In no scenario did LOCF yield a Type I error rate that was at least 1.00% closer to the expected rate than the corresponding rate from MMRM. The average estimate of SEΔ from MMRM was greater in data with dropout than in complete data, whereas the average estimate of SEΔ from LOCF was smaller in data with dropout than in complete data, suggesting that standard errors from MMRM better reflected the uncertainty in the data. The results from this investigation support those from previous studies, which found that MMRM provided reasonable control of Type I error even in the presence of MNAR missingness. No universally best approach to analysis of longitudinal data exists. However, likelihood‐based MAR approaches have been shown to perform well in a variety of situations and are a sensible alternative to the LOCF approach. 
MNAR methods can be used within a sensitivity analysis framework to test the potential presence and impact of MNAR data, thereby assessing robustness of results from an MAR method. Copyright © 2004 John Wiley & Sons, Ltd.
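The LOCF imputation compared above is mechanically trivial: each subject's missing visits are filled with that subject's most recent observed value. A minimal sketch; the subjects-by-visits array layout is an assumption for illustration:

```python
import numpy as np

def locf(y):
    """Last observation carried forward, row-wise.

    `y` is a subjects x visits array; NaN marks a missed visit. Each NaN is
    replaced by the most recent non-missing value to its left.
    """
    y = np.array(y, dtype=float)             # copy so the input is untouched
    for row in y:                            # rows are views, so edits stick
        for t in range(1, row.size):
            if np.isnan(row[t]):
                row[t] = row[t - 1]
    return y

completed = locf([[1.0, np.nan, np.nan],
                  [2.0, 3.0,    np.nan]])
# each subject's endpoint is now the last value actually observed
```

MMRM, by contrast, requires a full mixed-model fit (e.g. a REML routine); the comparison in the abstract shows that LOCF's simplicity comes at the cost of inflated Type I error under dropout.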
63.
Professor Stephen Senn Dr Dipti Amin Professor Rosemary A. Bailey Professor Sheila M. Bird FFPH Dr Barbara Bogacka Mr Peter Colman Dr Andrew Garrett Professor Andrew Grieve Professor Sir Peter Lachmann FRS FMedSci 《Journal of the Royal Statistical Society. Series A, (Statistics in Society)》2007,170(3):517-579
64.
Seismic risk can be reduced by implementing newly developed seismic provisions in design codes. Furthermore, financial protection or enhanced utility and happiness for stakeholders could be gained through the purchase of earthquake insurance; were this not so, there would be no market for such insurance. However, the perceived benefit of insurance is not universally shared by stakeholders, partly owing to their diverse risk attitudes. This study investigates the implied seismic design preference with insurance options for decision-makers of bounded rationality whose preferences can be adequately represented by the cumulative prospect theory (CPT). The investigation focuses on assessing the sensitivity of the implied seismic design preference with insurance options to the model parameters of the CPT and to fair and unfair insurance arrangements. Numerical results suggest that human cognitive limitations and risk perception can significantly affect the seismic design preference implied by the CPT. The mandatory purchase of fair insurance leads the implied seismic design preference to the optimum design level dictated by the minimum expected life-cycle cost rule. Unfair insurance decreases the expected gain as well as its associated variability, which is preferred by risk-averse decision-makers. The obtained results for the implied preference over combinations of seismic design level and insurance option suggest that property owners, financial institutions, and municipalities can take advantage of affordable insurance to establish successful seismic risk management strategies.
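The CPT preferences invoked above are built from a value function that is concave for gains, convex and steeper for losses, plus an inverse-S probability weighting function. A sketch using the Tversky-Kahneman (1992) functional forms with their original parameter estimates; the parameter values actually varied in this paper's sensitivity analysis are not given here, so these defaults are only illustrative:

```python
def cpt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """CPT value function: x**alpha for gains, -lam*(-x)**beta for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def cpt_weight(p, gamma=0.61):
    """Inverse-S probability weighting: small probabilities are overweighted."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1.0 / gamma)

# loss aversion: a loss of 1 is felt about 2.25 times as strongly as a gain of 1,
# and a 5% probability (e.g. of a damaging earthquake) is perceived as larger than 5%
felt_loss = cpt_value(-1.0)
perceived_p = cpt_weight(0.05)
```

These two distortions (loss aversion and probability overweighting of rare events) are what make CPT decision-makers value earthquake insurance differently from an expected-cost minimizer.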
65.
Amy H. Herring Joseph G. Ibrahim Stuart R. Lipsitz 《Journal of the Royal Statistical Society. Series C, Applied statistics》2004,53(2):293-310
Summary. Non-ignorable missing data, a serious problem in both clinical trials and observational studies, can lead to biased inferences. Quality-of-life measures have become increasingly popular in clinical trials. However, these measures are often incompletely observed, and investigators may suspect that missing quality-of-life data are likely to be non-ignorable. Although several recent references have addressed missing covariates in survival analysis, they all required the assumption that missingness is at random or that all covariates are discrete. We present a method for estimating the parameters in the Cox proportional hazards model when missing covariates may be non-ignorable and continuous or discrete. Our method is useful in reducing the bias and improving efficiency in the presence of missing data. The methodology clearly specifies assumptions about the missing data mechanism and, through sensitivity analysis, helps investigators to understand the potential effect of missing data on study results.
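The Cox model parameters discussed above enter through the partial likelihood; the authors' contribution is to augment it with a model for the non-ignorable missingness mechanism, which is not reproduced here. The partial log-likelihood itself (assuming no tied event times and fully observed covariates, on toy data) is compact enough to sketch:

```python
import numpy as np

def cox_partial_loglik(beta, times, events, X):
    """Cox partial log-likelihood, assuming no tied event times.

    times:  event/censoring times
    events: 1 if the time is an observed event, 0 if censored
    X:      covariate matrix (n x p)
    """
    beta = np.atleast_1d(np.asarray(beta, dtype=float))
    eta = X @ beta                           # linear predictor
    ll = 0.0
    for i in np.flatnonzero(events):
        at_risk = times >= times[i]          # risk set at subject i's event time
        ll += eta[i] - np.log(np.sum(np.exp(eta[at_risk])))
    return ll

# at beta = 0 every event contributes -log(size of its risk set)
times = np.array([1.0, 2.0, 3.0])
events = np.array([1, 1, 1])
X = np.array([[0.5], [-0.2], [0.1]])
ll0 = cox_partial_loglik([0.0], times, events, X)
```

With three events and risk sets of sizes 3, 2, 1, the value at beta = 0 is -(log 3 + log 2), which makes the risk-set construction easy to check by hand.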
66.
Time, Self, and the Curiously Abstract Concept of Agency*
The term "agency" is quite slippery and is used differently depending on the epistemological roots and goals of the scholars who employ it. Distressingly, the sociological literature on the concept rarely addresses relevant social psychological research. We take a social behaviorist approach to agency by suggesting that individual temporal orientations are underutilized in conceptualizing this core sociological concept. Different temporal foci (the actor's engaged response to situational circumstances) implicate different forms of agency. This article offers a theoretical model involving four analytical types of agency ("existential," "identity," "pragmatic," and "life course") that are often conflated across treatments of the topic. Each mode of agency overlaps with established social psychological literatures, most notably about the self, enabling scholars to anchor overly abstract treatments of agency within established research literatures.
67.
Sarah E. H. Moore 《Sociology Compass》2008,2(1):268-280
This article provides a critical review of literature on the relationship between gender and the 'new paradigm' of health. An overview of the feminist critique of health is given, from the Women's Health Movement of the late 1960s and early feminist debates about medicalisation, to more recent discussions about structural inequalities between men and women, eating disorders, and AIDS. I then go on to explore the feminist response to the so-called 'new paradigm' of health (an approach that emphasises health promotion, individual responsibility for health, and body-monitoring). Arguments that health promotion initiatives target women and confirm their position as principal guardians of health within the family are considered, as well as literature on the breast cancer awareness campaign. I then explore the growing body of literature on masculinity and health, and its account of the relationship between gender and current ideas about healthiness. Finally, I offer up some suggestions for the direction a new feminist critique of health might take.
68.
Stephanie M. Pickle Timothy J. Robinson Jeffrey B. Birch Christine M. Anderson-Cook 《Journal of statistical planning and inference》2008
Parameter design or robust parameter design (RPD) is an engineering methodology intended as a cost-effective approach for improving the quality of products and processes. The goal of parameter design is to choose the levels of the control variables that optimize a defined quality characteristic. An essential component of RPD involves the assumption of well estimated models for the process mean and variance. Traditionally, the modeling of the mean and variance has been done parametrically. It is often the case, particularly when modeling the variance, that nonparametric techniques are more appropriate due to the nature of the curvature in the underlying function. Most response surface experiments involve sparse data. In sparse data situations with unusual curvature in the underlying function, nonparametric techniques often result in estimates with problematic variation whereas their parametric counterparts may result in estimates with problematic bias. We propose the use of semi-parametric modeling within the robust design setting, combining parametric and nonparametric functions to improve the quality of both mean and variance model estimation. The proposed method will be illustrated with an example and simulations.
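The parametric-plus-nonparametric combination described above can be illustrated in its simplest form: fit a low-order parametric model, then add a kernel smooth of its residuals, so the nonparametric part only has to capture the curvature the parametric part misses. A toy sketch with a Gaussian kernel and a fixed bandwidth; the authors' actual semi-parametric estimator and its weighting scheme are not reproduced:

```python
import numpy as np

def nw_smooth(x, y, x0, h):
    """Nadaraya-Watson kernel estimate of E[y|x] at x0 (Gaussian kernel)."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def semiparametric_fit(x, y, x0, h=0.3):
    """Quadratic OLS fit plus a kernel smooth of its residuals at x0."""
    design = np.column_stack([np.ones_like(x), x, x ** 2])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    parametric = coef[0] + coef[1] * x0 + coef[2] * x0 ** 2
    resid = y - design @ coef                # what the parametric part missed
    return parametric + nw_smooth(x, resid, x0, h)

# sanity check on noiseless quadratic data: the residual smooth contributes ~0
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x + 3.0 * x ** 2
est = semiparametric_fit(x, y, 0.5)          # true value is 2.75
```

In the RPD setting the same construction would be applied twice, once to the sample means and once to (log) sample variances across the noise replicates.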
69.
Chunming M. Zhang 《Revue canadienne de statistique》2003,31(2):151-171
Many applications of nonparametric tests based on curve estimation involve selecting a smoothing parameter. The author proposes an adaptive test that combines several generalized likelihood ratio tests in order to get power performance nearly equal to whichever of the component tests is best. She derives the asymptotic joint distribution of the component tests and that of the proposed test under the null hypothesis. She also develops a simple method of selecting the smoothing parameters for the proposed test and presents two approximate methods for obtaining its P‐value. Finally, she evaluates the proposed test through simulations and illustrates its application to a set of real data.
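The combination idea in this abstract (use several component tests and adapt so that power tracks the best of them) can be mimicked generically: standardize each component statistic under the null, take the maximum, and calibrate the p-value by simulation. This is a generic stand-in for intuition only, not Zhang's generalized likelihood ratio construction or her approximate p-value methods:

```python
import numpy as np

def adaptive_max_pvalue(null_sims, observed):
    """Max-of-standardized-statistics combination of several component tests.

    null_sims: (nsim x k) component statistics simulated under the null
    observed:  length-k vector of observed component statistics
    Returns a Monte Carlo p-value for the max-standardized statistic.
    """
    null_sims = np.asarray(null_sims, dtype=float)
    mu = null_sims.mean(axis=0)
    sd = null_sims.std(axis=0, ddof=1)
    t_obs = np.max((np.asarray(observed, dtype=float) - mu) / sd)
    t_null = np.max((null_sims - mu) / sd, axis=1)
    return float(np.mean(t_null >= t_obs))

# three hypothetical component tests (e.g. three smoothing parameters);
# a clearly non-null observation should give a tiny p-value
rng = np.random.default_rng(1)
sims = rng.standard_normal((2000, 3))        # stand-in null distribution
p_big = adaptive_max_pvalue(sims, [6.0, 6.0, 6.0])
```

Taking the max is what buys adaptivity: the combined test pays only a modest multiplicity price while inheriting most of the power of the best component.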
70.
Owing to the extreme quantiles involved, standard control charts are very sensitive to the effects of parameter estimation and non-normality. More general parametric charts have been devised to deal with the latter complication, and corrections have been derived to compensate for the estimation step, both under normal and parametric models. The resulting procedures offer a satisfactory solution over a broad range of underlying distributions. However, situations do occur where even such a large model is inadequate and nothing remains but to consider non-parametric charts. In principle, these form ideal solutions, but the problem is that huge sample sizes are required for the estimation step. Otherwise the resulting stochastic error is so large that the chart is very unstable, a disadvantage that seems to outweigh the advantage of avoiding the model error of the parametric case. Here we analyse under what conditions non-parametric charts actually become feasible alternatives to their parametric counterparts. In particular, corrected versions are suggested for which a possible change point is reached at sample sizes that are markedly less huge (but still larger than the customary range). These corrections serve to control the in-control behaviour (markedly wrong outcomes of the estimates occur only sufficiently rarely). The price for this protection will clearly be some loss of detection power when out of control. A change point comes in view as soon as this loss can be made sufficiently small.
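The "huge sample sizes" problem analysed above comes directly from estimating extreme quantiles empirically: an uncorrected nonparametric control limit is just a high-order statistic of the reference sample, and for a 0.999 quantile that order statistic barely exists unless n is in the thousands. A minimal sketch of the uncorrected limit; the corrections proposed in the paper are not reproduced:

```python
import math

def np_upper_limit(reference, p=0.999):
    """Nonparametric upper control limit: the ceil(p*n)-th order statistic."""
    s = sorted(reference)
    n = len(s)
    k = min(n - 1, math.ceil(p * n) - 1)     # 0-based index of the order statistic
    return s[k]

limit = np_upper_limit(range(1, 1001))       # n = 1000 reference observations
small_limit = np_upper_limit(range(1, 101))  # n = 100: the "limit" degenerates
```

With only 100 reference observations the 0.999 "quantile" is simply the sample maximum, which is exactly the instability (huge stochastic error in the estimated limit) that motivates the corrected charts.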