Search results: 389 articles matched (381 paid full-text, 7 free, 1 domestic free), spanning 1978–2024 across management, statistics, sociology, and related categories; search time 93 ms. Items 91–100 are shown below.
91.
Coverage and response rate challenges facing telephone and internet surveys have encouraged scientists to reconsider mail data collection methods. Although response rates to telephone surveys have declined sharply over the last 20 years, it is unclear how response rates to mail surveys have fared over the same period. This study analyzes 179 mail-back surveys of visitors to US National Parks from 1988 to 2007, all of which used virtually the same administration procedures. Results show that response rates, based only on those who initially agreed to return a questionnaire, have remained high, averaging 76%, even as the number of questions and pages steadily increased. Despite this rise in respondent burden, rates have declined only moderately, from about 80% in the late 1980s to about 70% more recently. The roles of additional contacts and survey salience in maintaining high response rates are examined. Results suggest that mail-back surveys remain an effective data collection procedure for obtaining information from quasi-general public populations.
92.
Use of Bayesian modelling and analysis has become commonplace in many disciplines (finance, genetics and image analysis, for example). Many complex data sets do not readily admit standard distributions, and often comprise skewed and kurtotic data. Such data are well modelled by the very flexibly shaped distributions of the quantile distribution family, whose members are defined by the inverses of their cumulative distribution functions and rarely have analytical likelihood functions. Without explicit likelihood functions, Bayesian methodologies such as Gibbs sampling cannot be applied to parameter estimation for this valuable class of distributions without resorting to numerical inversion. Approximate Bayesian computation (ABC) provides an alternative approach requiring only a sampling scheme for the distribution of interest, enabling easier use of quantile distributions under the Bayesian framework. Parameter estimates for simulated and experimental data are presented.
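As a rough illustration of the idea (not the paper's own algorithm or its specific quantile distribution), the sketch below uses ABC rejection to recover the location and scale of a Tukey-lambda-type quantile distribution: sampling is simply Q(U) for uniform U, so no likelihood is ever evaluated. The summary statistics, prior ranges, and keep-the-closest acceptance rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def tukey_lambda_quantile(p, loc, scale, lam=0.14):
    # Quantile (inverse-CDF) definition; the density has no closed form,
    # but sampling is trivial: draw U ~ Uniform(0,1) and return Q(U).
    return loc + scale * (p**lam - (1 - p)**lam) / lam

def sample(n, loc, scale):
    return tukey_lambda_quantile(rng.uniform(size=n), loc, scale)

def summaries(x):
    # Robust summary statistics: a handful of empirical quantiles
    return np.quantile(x, [0.125, 0.25, 0.5, 0.75, 0.875])

# "Observed" data with true loc=3, scale=2
obs = sample(500, loc=3.0, scale=2.0)
s_obs = summaries(obs)

# ABC rejection: simulate from the prior, keep the 1% closest simulations
n_sims = 5000
params = np.column_stack([rng.uniform(-10, 10, n_sims),
                          rng.uniform(0.1, 10, n_sims)])
dist = np.array([np.linalg.norm(summaries(sample(500, l, s)) - s_obs)
                 for l, s in params])
kept = params[np.argsort(dist)[:50]]
loc_hat, scale_hat = kept.mean(axis=0)   # crude posterior-mean estimates
```

With more simulations, a shrinking tolerance, and a local regression adjustment, this becomes the standard ABC workflow for quantile distributions.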
93.
A procedure is proposed for assessing bioequivalence of variabilities between two formulations in bioavailability/bioequivalence studies. The procedure is essentially a two one-sided Pitman-Morgan test procedure based on the correlation between crossover differences and subject totals. A nonparametric version of the proposed test is also discussed. A dataset of AUC values from a 2×2 crossover bioequivalence trial illustrates the proposed procedures.
94.
Hattis, D., Banati, P., Goble, R., & Burmaster, D. E. (1999). Risk Analysis, 19(4), 711–726.
This paper reviews existing data on the variability in parameters relevant for health risk analyses, covering both exposure-related parameters and parameters related to individual susceptibility to toxicity. The toxicity/susceptibility database under construction is part of a longer-term research effort to lay the groundwork for quantitative distributional analyses of non-cancer toxic risks. These data are broken down into parameter types that encompass different portions of the pathway from external exposure to the production of biological responses. The discrete steps in this pathway, as we now conceive them, are:
- Contact rate (breathing rate per body weight; fish consumption per body weight)
- Uptake or absorption as a fraction of intake or contact rate
- General systemic availability net of first-pass elimination and dilution via distribution volume (e.g., initial blood concentration per mg/kg of uptake)
- Systemic elimination (half-life or clearance)
- Active-site concentration per systemic blood or plasma concentration
- Physiological parameter change per active-site concentration (expressed as the dose required to make a given percentage change in different people, or the dose required to achieve some proportion of an individual's maximum response to the drug or toxicant)
- Functional reserve capacity: the change in a baseline physiological parameter needed to produce a biological response or pass a criterion of abnormal function
Comparison of the amounts of variability observed for the different parameter types suggests that appreciable variability is associated with the final step in the process: differences among people in functional reserve capacity. This implies that relevant information for estimating effective toxic susceptibility distributions may be gleaned from direct studies of the population distributions of key physiological parameters in people who are not exposed to the environmental and occupational toxicants thought to perturb those parameters. This is illustrated with recent observations of the population distributions of low-density lipoprotein cholesterol from the second and third National Health and Nutrition Examination Surveys.
95.
The von Bertalanffy growth model is extended to incorporate explanatory variables. The generalized model includes the switched growth model and the seasonal growth model as special cases, and can also be used to assess the effect of tagging on growth. Distribution-free and consistent estimating functions are constructed for estimating growth parameters from tag-recapture data in which age at release is unknown. This generalizes the work of James (1991, Biometrics, 47, 1519–1530), who considered the classical model and allowed for individual variability in growth. A real dataset from barramundi (Lates calcarifer) is analysed to estimate the growth parameters and a possible effect of tagging on growth.
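For the classical (covariate-free) special case, tag-recapture growth increments can be fitted without knowing age at release, via the Fabens form ΔL = (L∞ − L1)(1 − e^(−kΔt)). The sketch below recovers (L∞, k) by a simple grid search on simulated data; all numbers are illustrative, and the paper's estimating-function machinery and covariate extensions are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate tag-recapture data: length at release L1, time at liberty dt,
# and the observed growth increment dL (Fabens form + measurement noise)
Linf_true, k_true = 100.0, 0.3
n = 200
L1 = rng.uniform(20, 80, n)        # length at release (age unknown)
dt = rng.uniform(0.5, 3.0, n)      # years between release and recapture
dL = (Linf_true - L1) * (1 - np.exp(-k_true * dt)) + rng.normal(0, 2, n)

# Least-squares fit of (Linf, k) by grid search over the Fabens SSE
Linf_grid = np.linspace(80, 120, 81)   # step 0.5
k_grid = np.linspace(0.1, 0.6, 51)     # step 0.01
best = min(
    (np.sum((dL - (Li - L1) * (1 - np.exp(-ki * dt)))**2), Li, ki)
    for Li in Linf_grid for ki in k_grid
)
_, Linf_hat, k_hat = best
```

In the generalized model, k (or L∞) would be replaced by a function of explanatory variables such as season or a tagging indicator, with the same increment-based fitting idea.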
96.
Progressive multi-state models provide a convenient framework for characterizing chronic disease processes in which the states represent the degree of damage resulting from the disease. Incomplete data often arise in studies of such processes, and standard methods of analysis can lead to biased parameter estimates when observation of data is response-dependent. This paper describes a joint analysis useful for fitting progressive multi-state models to data arising in longitudinal studies in such settings. Likelihood-based methods are described and the parameters are shown to be identifiable. An EM algorithm is described for parameter estimation, and variance estimation is carried out using Louis' method. Simulation studies demonstrate that the proposed method works well in practice under a variety of settings. An application to data from a smoking prevention study illustrates the utility of the method.
97.
While academic researchers continue to debate the effect of board independence on performance, its efficacy could also be reflected in whether firm performance is made more stable. Board governance activities are a constellation of actions aimed at managing agency costs and ensuring the viability of a company over time. The efficacy of such actions would therefore be reflected in a distal outcome: lower firm performance variability. Boards that can control agency costs and limit both underinvestment and overinvestment would reduce a firm's deviation from its mean performance trajectory. Using a longitudinal sample of publicly traded companies in the United States, we find that board stability, board resource provision, and CEO influence are negatively associated with performance variability, while board independence is not. With increasing board independence, greater board stability and greater CEO influence are negatively associated with performance variability; greater board resource provision, however, is not.
98.
ROC analysis involving two large datasets is an important method for analyzing statistics of interest for classifier decision making in many disciplines. Because resources are limited, the same subjects are often used multiple times to generate more samples, so data dependency is ubiquitous. Hence, a two-layer data structure is constructed, and the nonparametric two-sample two-layer bootstrap is employed to estimate standard errors of statistics derived from two sets of data, such as a weighted sum of two probabilities. In this article, to reduce the bootstrap variance and ensure the accuracy of computation, Monte Carlo studies of bootstrap variability were carried out to determine the appropriate number of bootstrap replications for ROC analysis with data dependency. It is suggested that, with a tolerance of 0.02 for the coefficient of variation, 2,000 bootstrap replications are appropriate under such circumstances.
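The role of the number of replications B can be seen in a one-layer toy version (the paper's two-sample two-layer scheme is not reproduced here): the bootstrap standard-error estimate is itself a Monte Carlo quantity, and its coefficient of variation across repeated bootstrap runs shrinks roughly like 1/√B. All settings below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 300)   # a fixed sample; we bootstrap the SE of its median

def boot_se(data, B):
    # One bootstrap run with B resamples: SE = std of the resampled medians
    idx = rng.integers(0, len(data), size=(B, len(data)))
    return np.median(data[idx], axis=1).std(ddof=1)

# Coefficient of variation of the bootstrap SE across 50 repeated runs,
# for a small and a large number of replications
cv = {}
for B in (100, 2000):
    ses = np.array([boot_se(x, B) for _ in range(50)])
    cv[B] = ses.std(ddof=1) / ses.mean()
```

Choosing B so that this CV falls below a tolerance (0.02 in the paper) is exactly the kind of Monte Carlo calibration the abstract describes, extended there to the dependent two-layer setting.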
99.
Central to many inferential situations is the estimation of rational functions of parameters. The mainstream in statistics and econometrics estimates these quantities with the plug-in approach, without consideration of the main objective of the inferential situation. We propose the Bayesian minimum expected loss (MELO) approach, which focuses explicitly on the function of interest and calculates its frequentist variability. Asymptotic properties of the MELO estimator are similar to those of the plug-in approach. Nevertheless, simulation exercises show that our proposal is better in situations characterised by small sample sizes and/or noisy data sets. In addition, in the applications our approach gives lower standard errors than frequently used alternatives when data sets are not very informative.
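A concrete instance of the contrast (illustrative only, and not necessarily the paper's exact estimator): for a ratio θ = a/b, the Zellner-style MELO estimator under squared-error loss weighted by b² is E[ab]/E[b²], which differs from the plug-in E[a]/E[b] precisely when the denominator is noisy. The posterior is mocked up with independent normal draws.

```python
import numpy as np

rng = np.random.default_rng(3)

# Mock posterior draws for (a, b); the quantity of interest is theta = a / b
a = rng.normal(2.0, 0.5, 100_000)
b = rng.normal(1.0, 0.4, 100_000)   # noisy denominator -> plug-in can be unstable

plug_in = a.mean() / b.mean()

# MELO for a ratio: the c minimizing E[b^2 (c - a/b)^2] = E[(cb - a)^2]
# is c = E[ab] / E[b^2]
melo = np.mean(a * b) / np.mean(b * b)
```

The extra E[b²] term in the denominator shrinks the estimate when b is uncertain, which is the mechanism behind the smaller standard errors reported for uninformative data sets.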
100.
A mixture experiment is an experiment in which the response is assumed to depend on the relative proportions of the ingredients present in the mixture and not on the total amount of the mixture. In such experiments, process variables do not form any portion of the mixture, but changing their levels can affect the blending properties of the ingredients. Mixture experiments are sometimes costly and must be conducted in a small number of runs. Here, a general method is given for constructing efficient mixture experiments in a minimum number of runs by projecting an efficient response surface design onto the constrained region. Efficient designs with few runs are constructed for three-, four-, and five-component mixture experiments with one process variable.