21.
Jonathan H. Wright 《Econometric Reviews》2002,21(4):397-417
Many recent papers have used semiparametric methods, especially the log-periodogram regression, to detect and estimate long memory in the volatility of asset returns. In these papers, the volatility is proxied by measures such as squared, log-squared, and absolute returns. While the evidence for the existence of long memory is strong using any of these measures, the actual long memory parameter estimates can be sensitive to which measure is used. In Monte Carlo simulations, I find that if the data are conditionally leptokurtic, the log-periodogram regression estimator using squared returns has a large downward bias, which is avoided by using other volatility measures. In United States stock return data, I find that squared returns give much lower estimates of the long memory parameter than the alternative volatility measures, which is consistent with the simulation results. I conclude that researchers should avoid using the squared returns in the semiparametric estimation of long memory volatility dependencies.
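The log-periodogram regression referred to above (the Geweke–Porter-Hudak estimator) can be sketched in a few lines. This is a minimal illustration, not the paper's exact implementation: the bandwidth choice m = √n and the t-distributed returns are assumptions made for the example.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Geweke-Porter-Hudak log-periodogram estimate of the long-memory
    parameter d of the series x (e.g. squared returns as a volatility proxy)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    if m is None:
        m = int(np.sqrt(n))                        # common bandwidth choice
    x = x - x.mean()
    dft = np.fft.fft(x)
    j = np.arange(1, m + 1)
    lam = 2 * np.pi * j / n                        # first m Fourier frequencies
    I = np.abs(dft[1:m + 1]) ** 2 / (2 * np.pi * n)  # periodogram ordinates
    # Regress log I(lam_j) on -log(4 sin^2(lam_j / 2)); the slope estimates d
    X = np.column_stack([np.ones(m), -np.log(4 * np.sin(lam / 2) ** 2)])
    coef, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    return coef[1]

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=4096)   # conditionally leptokurtic returns
d_sq = gph_estimate(returns ** 2)           # squared-return volatility proxy
d_abs = gph_estimate(np.abs(returns))       # absolute-return volatility proxy
```

For i.i.d. returns the true long memory parameter is zero under every volatility proxy, so comparing `d_sq` and `d_abs` across many replications is one way to expose the downward bias of the squared-return proxy that the abstract describes.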
22.
Carmen Fernández Eduardo Ley Mark F. J. Steel 《Journal of the Royal Statistical Society. Series C, Applied statistics》2002,51(3):257-280
We model daily catches of fishing boats in the Grand Bank fishing grounds. We use data on catches per species for a number of vessels collected by the European Union in the context of the Northwest Atlantic Fisheries Organization. Many variables can be thought to influence the amount caught: a number of ship characteristics (such as the size of the ship, the fishing technique used and the mesh size of the nets) are obvious candidates, but one can also consider the season or the actual location of the catch. Our database leads to 28 possible regressors (arising from six continuous variables and four categorical variables, whose 22 levels are treated separately), resulting in a set of 177 million possible linear regression models for the log-catch. Zero observations are modelled separately through a probit model. Inference is based on Bayesian model averaging, using a Markov chain Monte Carlo approach. Particular attention is paid to the prediction of catches for single and aggregated ships.
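Bayesian model averaging over a regression model space can be sketched on a toy scale. The paper explores 177 million models by MCMC; the sketch below instead enumerates every subset of a tiny hypothetical regressor set and approximates posterior model probabilities with BIC weights, a standard large-sample shortcut rather than the authors' fully Bayesian treatment.

```python
import itertools
import numpy as np

def bma_bic(X, y):
    """Bayesian model averaging over all subsets of the columns of X,
    approximating posterior model probabilities with BIC weights."""
    n, p = X.shape
    models = []
    for k in range(p + 1):
        for cols in itertools.combinations(range(p), k):
            A = (np.ones((n, 1)) if not cols
                 else np.column_stack([np.ones(n), X[:, list(cols)]]))
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = float(np.sum((y - A @ beta) ** 2))
            bic = n * np.log(rss / n) + A.shape[1] * np.log(n)
            models.append((cols, bic))
    bics = np.array([b for _, b in models])
    w = np.exp(-0.5 * (bics - bics.min()))   # relative posterior weights
    w /= w.sum()
    return [(cols, wt) for (cols, _), wt in zip(models, w)]

# Hypothetical data: only the first regressor truly matters
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + rng.normal(size=200)
post = bma_bic(X, y)
incl = [sum(wt for cols, wt in post if j in cols) for j in range(3)]
```

`incl[j]` is the posterior inclusion probability of regressor j — the quantity that model averaging reports instead of a single selected model, and what predictions are averaged over.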
23.
If a population contains many zero values and the sample size is not very large, the traditional normal approximation‐based confidence intervals for the population mean may have poor coverage probabilities. This problem is substantially reduced by constructing parametric likelihood ratio intervals when an appropriate mixture model can be found. In the context of survey sampling, however, there is a general preference for making minimal assumptions about the population under study. The authors have therefore investigated the coverage properties of nonparametric empirical likelihood confidence intervals for the population mean. They show that under a variety of hypothetical populations, these intervals often outperformed parametric likelihood intervals by having more balanced coverage rates and larger lower bounds. The authors illustrate their methodology using data from the Canadian Labour Force Survey for the year 2000.
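An empirical likelihood interval for a mean can be sketched directly from Owen's formulation: invert the -2 log empirical likelihood ratio against a χ²(1) cutoff. The zero-inflated sample, the grid search, and the plain-random-sampling setup below are illustrative assumptions — the paper works with complex survey data, which needs a design-adjusted version.

```python
import numpy as np

def el_loglik_ratio(x, mu):
    """-2 log empirical likelihood ratio for the population mean mu."""
    z = np.asarray(x, dtype=float) - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                  # mu outside the convex hull of the data
    # Solve sum(z_i / (1 + lam * z_i)) = 0 by bisection; the function is
    # strictly decreasing on the interval keeping all implied weights positive.
    lo = (-1 + 1e-10) / z.max()
    hi = (-1 + 1e-10) / z.min()
    for _ in range(100):
        lam = 0.5 * (lo + hi)
        if np.sum(z / (1 + lam * z)) > 0:
            lo = lam
        else:
            hi = lam
    return 2 * np.sum(np.log(1 + lam * z))

# Hypothetical zero-heavy population: ~60% exact zeros, the rest exponential
rng = np.random.default_rng(2)
x = np.where(rng.random(80) < 0.6, 0.0, rng.exponential(5.0, 80))
grid = np.linspace(0.1, x.mean() + 3.0, 400)
ci = [mu for mu in grid if el_loglik_ratio(x, mu) <= 3.841]  # chi2(1) 95% cutoff
```

Because the ratio is anchored to the convex hull of the observed data, the interval's lower bound cannot fall below the sample minimum — one reason these intervals tend to have larger lower bounds than normal-approximation intervals for zero-heavy data.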
24.
Craig H. Mallinckrodt Christopher J. Kaiser John G. Watkin Michael J. Detke Geert Molenberghs Raymond J. Carroll 《Pharmaceutical statistics》2004,3(3):171-186
The last observation carried forward (LOCF) approach is commonly utilized to handle missing values in the primary analysis of clinical trials. However, recent evidence suggests that likelihood‐based analyses developed under the missing at random (MAR) framework are sensible alternatives. The objective of this study was to assess the Type I error rates from a likelihood‐based MAR approach – mixed‐model repeated measures (MMRM) – compared with LOCF when estimating treatment contrasts for mean change from baseline to endpoint (Δ). Data emulating neuropsychiatric clinical trials were simulated in a 4 × 4 factorial arrangement of scenarios, using four patterns of mean changes over time and four strategies for deleting data to generate subject dropout via an MAR mechanism. In data with no dropout, estimates of Δ and SEΔ from MMRM and LOCF were identical. In data with dropout, the Type I error rates (averaged across all scenarios) for MMRM and LOCF were 5.49% and 16.76%, respectively. In 11 of the 16 scenarios, the Type I error rate from MMRM was at least 1.00% closer to the expected rate of 5.00% than the corresponding rate from LOCF. In no scenario did LOCF yield a Type I error rate that was at least 1.00% closer to the expected rate than the corresponding rate from MMRM. The average estimate of SEΔ from MMRM was greater in data with dropout than in complete data, whereas the average estimate of SEΔ from LOCF was smaller in data with dropout than in complete data, suggesting that standard errors from MMRM better reflected the uncertainty in the data. The results from this investigation support those from previous studies, which found that MMRM provided reasonable control of Type I error even in the presence of missing-not-at-random (MNAR) missingness. No universally best approach to analysis of longitudinal data exists. However, likelihood‐based MAR approaches have been shown to perform well in a variety of situations and are a sensible alternative to the LOCF approach. MNAR methods can be used within a sensitivity analysis framework to test the potential presence and impact of MNAR data, thereby assessing robustness of results from an MAR method. Copyright © 2004 John Wiley & Sons, Ltd.
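The LOCF imputation being compared above is simple to state in code. A minimal NumPy sketch, assuming each row is one subject's visit sequence with the baseline observed:

```python
import numpy as np

def locf(data):
    """Last observation carried forward: replace each missing (NaN) visit
    with the subject's most recent observed value to its left."""
    out = np.array(data, dtype=float)
    for row in out:
        last = np.nan                  # stays NaN if the baseline is missing
        for j in range(row.size):
            if np.isnan(row[j]):
                row[j] = last
            else:
                last = row[j]
    return out

# Change from baseline to endpoint (the contrast Δ is built from this)
visits = [[10.0, 8.0, np.nan, np.nan],   # subject who drops out after visit 2
          [10.0, 9.0, 7.0, 6.0]]         # completer
completed = locf(visits)
delta = completed[:, -1] - completed[:, 0]
```

The dropout subject's endpoint is frozen at the visit-2 value, which is exactly the mechanism that understates uncertainty and inflates Type I error in the scenarios described above; MMRM instead models all observed visits jointly under MAR.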
25.
Seismic risk can be reduced by implementing newly developed seismic provisions in design codes. In addition, stakeholders may gain financial protection, or greater utility, by purchasing earthquake insurance; if they did not perceive such benefits, there would be no market for such insurance. However, the perceived benefit of insurance is not universally shared by stakeholders, partly because of their diverse risk attitudes. This study investigates the implied seismic design preference with insurance options for decision-makers of bounded rationality whose preferences can be adequately represented by the cumulative prospect theory (CPT). The investigation focuses on assessing the sensitivity of the implied seismic design preference with insurance options to the model parameters of the CPT and to fair and unfair insurance arrangements. Numerical results suggest that human cognitive limitations and risk perception can significantly affect the implied seismic design preference under the CPT. The mandatory purchase of fair insurance leads the implied seismic design preference to the optimum design level dictated by the minimum expected lifecycle cost rule. Unfair insurance decreases the expected gain as well as its associated variability, which is preferred by risk-averse decision-makers. The results on the implied preference for combinations of seismic design level and insurance option suggest that property owners, financial institutions, and municipalities can take advantage of affordable insurance to establish successful seismic risk management strategies.
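The CPT preference functional combines an S-shaped value function with inverse-S probability weighting. A minimal sketch using the Tversky–Kahneman (1992) functional forms and their median parameter estimates (α = 0.88, λ = 2.25, γ⁺ = 0.61, γ⁻ = 0.69) — the study's own parameterization and loss/gain framing of design costs may differ:

```python
def w(p, gamma):
    """Inverse-S probability weighting: overweights small probabilities."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def v(x, alpha=0.88, lam=2.25):
    """Value function: concave for gains, convex and steeper for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def cpt_value(outcomes, probs, alpha=0.88, lam=2.25, g_gain=0.61, g_loss=0.69):
    """Cumulative prospect theory value of a finite gamble."""
    pairs = sorted(zip(outcomes, probs))
    total, cum = 0.0, 0.0
    for x, p in pairs:                 # losses: cumulative weight from worst up
        if x < 0:
            total += v(x, alpha, lam) * (w(cum + p, g_loss) - w(cum, g_loss))
            cum += p
    cum = 0.0
    for x, p in reversed(pairs):       # gains: decumulative weight from best down
        if x >= 0:
            total += v(x, alpha, lam) * (w(cum + p, g_gain) - w(cum, g_gain))
            cum += p
    return total
```

For a 50/50 gamble of ±100, loss aversion (λ > 1) makes the CPT value negative even though the expected value is zero — the kind of risk-attitude effect that drives the sensitivity of the implied design preference discussed above.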
26.
27.
Children may be more susceptible than adults to toxicity from some environmental chemicals. This susceptibility may occur during narrow age periods (windows), which can last from days to years depending on the toxicant. Breathing rates specific to narrow age periods are useful for assessing inhalation dose during suspected windows of susceptibility. Because existing breathing rates used in risk assessment are typically for broad age ranges or are based on data not representative of the population, we derived daily breathing rates for narrow age ranges of children designed to be more representative of the current U.S. children's population. These rates were derived using the metabolic conversion method of Layton (1993) and energy intake data adjusted to represent the U.S. population from a relatively recent dietary survey (CSFII 1994–1996, 1998). We calculated conversion factors more specific to children than those previously used. Both nonnormalized (L/day) and normalized (L/kg-day) breathing rates were derived and found comparable to rates derived using energy estimates that are accurate for the individuals sampled but not representative of the population. Estimates of breathing rate variability within a population can be used with stochastic techniques to characterize the range of risk in the population from inhalation exposures. For each age and age-gender group, we present the mean, standard error of the mean, selected percentiles (50th, 90th, and 95th), geometric mean and standard deviation, and best-fit parametric models of the breathing rate distributions. The standard errors characterize uncertainty in the parameter estimates, while the percentiles describe the combined interindividual and intraindividual variability of the sampled population. These breathing rates can be used for risk assessment of subchronic and chronic inhalation exposures of narrow age groups of children.
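Layton's metabolic conversion turns daily energy intake into a daily breathing rate via an oxygen-uptake factor and a ventilatory equivalent. A sketch with typical literature values (H ≈ 0.21 L O₂/kcal, VQ ≈ 27) — the study derives conversion factors specific to children, so these commonly cited constants are placeholders, not the paper's values:

```python
H = 0.21    # litres of O2 consumed per kcal of energy expended (typical value)
VQ = 27.0   # ventilatory equivalent: litres of air breathed per litre of O2

def breathing_rate(energy_kcal_per_day, body_weight_kg=None):
    """Daily breathing rate via Layton's metabolic conversion,
    VE = energy intake * H * VQ, in L/day (or L/kg-day if a weight is given)."""
    ve = energy_kcal_per_day * H * VQ
    return ve / body_weight_kg if body_weight_kg else ve

adult_rate = breathing_rate(2000)          # nonnormalized, L/day
child_rate = breathing_rate(1600, 30.0)    # normalized, L/kg-day
```

Feeding a distribution of energy intakes (rather than a point value) through this conversion is what yields the breathing-rate distributions, and hence the percentiles and parametric fits, reported for each age group.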
28.
29.
Barbara Chaulk Phyllis J. Johnson Richard Bulcroft 《Journal of Family and Economic Issues》2003,24(3):257-279
Family development and prospect theory were used as a framework to predict variability in individuals' subjective financial risk tolerance within distinct family structures. Gender, age, and income were expected to interact with the main effects of family structure (marital status and children). Theory-generated hypotheses were examined in Study 1 (data from university housing respondents, n = 76) and Study 2 (the 1998 Survey of Consumer Finances, n = 4,305). One family structure main effect (child presence) was significant for investment risk tolerance in both studies. Family structure interactions (marital status × age and child × income) were significant for employment risk (Study 1), and child × age was significant for investment risk in Study 2.
30.
Maximum likelihood estimation and goodness-of-fit techniques are used within a competing risks framework to obtain maximum likelihood estimates of hazard, density, and survivor functions for randomly right-censored variables. Goodness-of-fit techniques are used to fit distributions to the crude lifetimes, which yield an estimate of the hazard function that is used, in turn, to construct the survivor and density functions of the net lifetime of the variable of interest. If only one of the crude lifetimes can be adequately characterized by a parametric model, then semi-parametric estimates may be obtained using a maximum likelihood estimate of one crude lifetime and the empirical distribution function of the other. Simulation studies show that the survivor function estimates from crude lifetimes compare favourably with those given by the product-limit estimator when crude lifetimes are chosen correctly. Other advantages are discussed.
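The product-limit (Kaplan–Meier) estimator used above as the nonparametric benchmark can be sketched directly. A minimal implementation for right-censored data, with events coded 1 = failure observed and 0 = censored:

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit estimate of the survivor function from
    right-censored data; returns (time, S(time)) at each failure time."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    surv, s = [], 1.0
    for t in np.unique(times[events == 1]):   # distinct observed failure times
        at_risk = np.sum(times >= t)          # subjects still under observation
        d = np.sum((times == t) & (events == 1))
        s *= 1.0 - d / at_risk                # multiply conditional survival
        surv.append((float(t), s))
    return surv

km = kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1])   # third subject censored
```

In the paper's approach, the parametric fits to the crude lifetimes replace these empirical steps; the simulation comparison asks when the parametric survivor function beats this step-function estimate.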