41.
Conjoint choice experiments have become a powerful tool for exploring individual preferences. The consistency of respondents' choices depends on choice complexity: it is easier to choose between two alternatives with few attributes than between five alternatives with several attributes, and in the latter case the preferred alternative is much harder to identify, which is reflected in a higher response error. Several authors have dealt with choice complexity in the estimation stage, but very little attention has been paid to setting up designs that take this complexity into account. The core issue of this paper is whether it is worthwhile to account for complexity in the design stage. We construct efficient semi-Bayesian D-optimal designs for the heteroscedastic conditional logit model, which is used to model the across-respondent variability that arises from choice complexity. The degree of complexity is measured by entropy, as suggested by Swait and Adamowicz (2001). The proposed designs are compared with a semi-Bayesian D-optimal design constructed without taking complexity into account. The simulation study shows that it is much better to take choice complexity into account when constructing conjoint choice experiments.
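A minimal sketch of the entropy measure of choice-set complexity (Swait and Adamowicz, 2001), assuming it is computed from conditional logit choice probabilities; the attribute matrix and part-worth coefficients below are hypothetical and not taken from the paper.

    import numpy as np

    def choice_set_entropy(X, beta):
        # X: (J, K) attribute levels of the J alternatives; beta: (K,) part-worths
        v = X @ beta                    # deterministic utilities
        p = np.exp(v - v.max())         # conditional logit probabilities (stabilised softmax)
        p /= p.sum()
        return -np.sum(p * np.log(p))   # entropy; equals ln(J) when all alternatives are equally attractive

    X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # hypothetical 3-alternative choice set
    beta = np.array([0.8, -0.5])                          # hypothetical part-worth coefficients
    print(choice_set_entropy(X, beta))                    # lower entropy = less complex, more consistent choices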
42.
Tests for the equality of variances are of interest in many areas such as quality control, agricultural production systems, experimental education, pharmacology and biology, as well as being a preliminary to the analysis of variance, dose–response modelling or discriminant analysis. The literature is vast. Traditional non-parametric tests are due to Mood, Miller and Ansari–Bradley. A test that usually stands out in terms of power and robustness against non-normality is the W50 Brown and Forsythe [Robust tests for the equality of variances, J. Am. Stat. Assoc. 69 (1974), pp. 364–367] modification of the Levene test [Robust tests for equality of variances, in Contributions to Probability and Statistics, I. Olkin, ed., Stanford University Press, Stanford, 1960, pp. 278–292]. This paper deals with the two-sample scale problem and in particular with Levene type tests. We consider 10 Levene type tests: the W50, M50 and L50 tests [G. Pan, On a Levene type test for equality of two variances, J. Stat. Comput. Simul. 63 (1999), pp. 59–71], the R-test [R.G. O'Brien, A general ANOVA method for robust tests of additive models for variances, J. Am. Stat. Assoc. 74 (1979), pp. 877–880], and the bootstrap and permutation versions of the W50, L50 and R tests. We also consider the F-test, the modified Fligner and Killeen test [Distribution-free two-sample tests for scale, J. Am. Stat. Assoc. 71 (1976), pp. 210–213], an adaptive test due to Hall and Padmanabhan [Adaptive inference for the two-sample scale problem, Technometrics 23 (1997), pp. 351–361] and the two tests due to Shoemaker [Tests for differences in dispersion based on quantiles, Am. Stat. 49(2) (1995), pp. 179–182; Interquantile tests for dispersion in skewed distributions, Commun. Stat. Simul. Comput. 28 (1999), pp. 189–205]. The aim is to identify effective methods for detecting scale differences. Our study differs from earlier ones in that it focuses on resampling versions of the Levene type tests, and many of the tests considered here have not previously been proposed or compared. The computationally simplest test found to be robust is W50. Higher power, while preserving robustness, is achieved by the resampling versions of the Levene type tests, namely the permutation R-test (recommended for normal and light-tailed distributions) and the bootstrap L50 test (recommended for heavy-tailed and skewed distributions). Among non-Levene type tests, the best is the adaptive test due to Hall and Padmanabhan.
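As a concrete illustration, the sketch below computes the W50 statistic (Levene's test with median centring, i.e. the Brown and Forsythe modification) through scipy, together with a simple permutation version of the same statistic; the two samples are simulated and purely illustrative.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = rng.normal(0, 1.0, 50)    # illustrative sample 1
    y = rng.normal(0, 1.5, 50)    # illustrative sample 2, larger scale

    # W50: one-way ANOVA F computed on absolute deviations from the sample medians
    stat, p_asym = stats.levene(x, y, center='median')

    def w50(a, b):
        za, zb = np.abs(a - np.median(a)), np.abs(b - np.median(b))
        return stats.f_oneway(za, zb).statistic

    obs, pooled = w50(x, y), np.concatenate([x, y])
    perm = [w50(*np.split(rng.permutation(pooled), [x.size])) for _ in range(2000)]
    p_perm = np.mean(np.array(perm) >= obs)   # permutation p-value
    print(stat, p_asym, p_perm)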
43.
This paper provides a novel approach to ordering signals based on the property that more informative signals lead to greater variability of conditional expectations. We define two nested information criteria (supermodular precision and integral precision) by combining this approach with two variability orders (the dispersive and convex orders). We relate the precision criteria to orderings based on the value of information to a decision maker. We then use precision to study the incentives of an auctioneer to supply private information. Using integral precision, we obtain two results: (i) a more precise signal yields a more efficient allocation; and (ii) the auctioneer provides less than the efficient level of information. Supermodular precision allows us to extend the analysis to the case in which supplying information is costly and to obtain an additional finding: (iii) there is a complementarity between information and competition, so that both the socially efficient and the auctioneer's optimal choice of precision increase with the number of bidders.
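For reference, the two variability orders named above have the following standard definitions (restated from the stochastic-orders literature rather than quoted from the paper); a signal is then considered more precise when it makes the conditional expectation of the payoff-relevant variable larger in the corresponding order.

\[
X \le_{\mathrm{cx}} Y \iff \mathbb{E}[\phi(X)] \le \mathbb{E}[\phi(Y)] \ \text{for every convex function } \phi,
\]
\[
X \le_{\mathrm{disp}} Y \iff F_X^{-1}(\beta) - F_X^{-1}(\alpha) \le F_Y^{-1}(\beta) - F_Y^{-1}(\alpha) \ \text{for all } 0 < \alpha < \beta < 1.
\]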
44.
We analyze the benefits of inventory pooling in a multi-location newsvendor framework. Using a number of common demand distributions, as well as the distribution-free approximation, we compare the centralized (pooled) system with the decentralized (non-pooled) system. We investigate the sensitivity of the absolute and relative reduction in costs to the variability of demand and to the number of locations (facilities) being pooled. We show that for the distributions considered, the absolute benefit of risk pooling increases with variability and the relative benefit stays fairly constant, as long as the coefficient of variation of demand stays in the low range. However, under high-variability conditions, both measures decrease to zero as the demand variability is increased. We show, through analytical results and computational experiments, that these effects are due to the different operating regimes exhibited by the system under different levels of variability: as variability increases, the system switches from normal operation to effective and then complete shutdown regimes; the decrease in the benefits of risk pooling is associated with the two latter stages. Centralization allows the system to remain in the normal operation regime under higher levels of variability than the decentralized system.
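The classical normal-demand case gives a quick sense of these magnitudes. The sketch below is an illustration under independent, identically distributed normal demands in the low-variability regime (not the paper's distribution-free analysis or its high-variability shutdown regimes), and the cost parameters are hypothetical: it compares the optimal expected overage-plus-underage cost of N separate newsvendors with that of a single pooled one.

    import numpy as np
    from scipy.stats import norm

    def newsvendor_cost(sigma, cu, co):
        # optimal expected overage + underage cost for normally distributed demand
        z = norm.ppf(cu / (cu + co))            # critical fractile
        return (cu + co) * sigma * norm.pdf(z)

    N, sigma, cu, co = 4, 30.0, 5.0, 1.0        # hypothetical parameters
    decentralized = N * newsvendor_cost(sigma, cu, co)
    centralized = newsvendor_cost(sigma * np.sqrt(N), cu, co)    # pooled demand std dev
    print(decentralized, centralized, 1 - centralized / decentralized)   # relative benefit 1 - 1/sqrt(N)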
45.
An exponentially weighted moving average (EWMA) control chart of squared distance is developed by means of a double EWMA approach to monitor process dispersion when individual measurements follow a distribution within the class of elliptically symmetric distributions. Several examples highlighting possible extensions of the control chart to multivariate processes are provided. In particular, for multivariate normal processes, an investigation of the chart's detection power is carried out through Monte Carlo studies. The results show that the proposed control chart performs well, especially when a process has a small or moderate shift.
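One plausible reading of the double EWMA construction is sketched below: an EWMA of the level combined with an EWMA of squared distances from that level. This is a generic illustration, not necessarily the authors' exact statistic, and control limits are omitted.

    import numpy as np

    def double_ewma_dispersion(x, lam=0.1, sigma0=1.0):
        z, d = x[0], sigma0 ** 2
        out = []
        for xt in x[1:]:
            d = lam * (xt - z) ** 2 + (1 - lam) * d   # EWMA of squared distance to the smoothed level
            z = lam * xt + (1 - lam) * z              # EWMA of the level itself
            out.append(d)
        return np.array(out)

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(0, 1, 100), rng.normal(0, 1.5, 50)])   # dispersion shift at t = 100
    print(double_ewma_dispersion(x)[-5:])   # the statistic drifts upward after the shift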
46.
Adam M. Finkel, Risk Analysis, 2014, 34(10): 1785-1794
If exposed to an identical concentration of a carcinogen, every human being would face a different level of risk, determined by his or her genetic, environmental, medical, and other uniquely individual characteristics. Various lines of evidence indicate that this susceptibility variable is distributed rather broadly in the human population, with perhaps a factor of 25- to 50-fold between the center of this distribution and either of its tails, but cancer risk assessment at the EPA and elsewhere has always treated every (adult) human as identically susceptible. The National Academy of Sciences “Silver Book” concluded that EPA and the other agencies should fundamentally correct their miscomputation of carcinogenic risk in two ways: (1) adjust individual risk estimates upward to provide information about the upper tail; and (2) adjust population risk estimates upward (by about sevenfold) to correct an underestimation due to a mathematical property of the interindividual distribution of human susceptibility, in which the susceptibility averaged over the entire (right-skewed) population exceeds the median value for the typical human. In this issue of Risk Analysis, Kenneth Bogen disputes the second adjustment and endorses the first, though he also relegates the problem of underestimated individual risks to the realm of “equity concerns” that he says should have little if any bearing on risk management policy. In this article, I show why the basis for the population risk adjustment that the NAS recommended is correct: current population cancer risk estimates, whether derived from animal bioassays or from human epidemiologic studies, likely provide estimates of the median with respect to human variation, which in turn must be an underestimate of the mean. Even if cancer risk estimates have larger “conservative” biases embedded in them, a premise I have disputed in many previous writings, such a defect would not excuse ignoring this additional bias in the direction of underestimation. I also demonstrate that sensible, legally appropriate, and ethical risk policy must not only inform the public when the tail of the individual risk distribution extends into the “high-risk” range, but must also alter benefit-cost balancing to account for the need to try to reduce these tail risks preferentially.
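To see why a right-skewed susceptibility distribution forces the population mean above the median, a lognormal illustration (an assumption made here for concreteness, not the NAS computation) suffices:

\[
\ln S \sim N(\mu,\sigma^{2}) \;\Rightarrow\; \operatorname{median}(S)=e^{\mu},\qquad \mathbb{E}[S]=e^{\mu+\sigma^{2}/2},\qquad \frac{\mathbb{E}[S]}{\operatorname{median}(S)}=e^{\sigma^{2}/2}.
\]

If the 97.5th percentile lies roughly 50-fold above the median, then \(e^{1.96\sigma}\approx 50\), so \(\sigma\approx 2.0\) and the mean exceeds the median by \(e^{\sigma^{2}/2}\approx 7\), the same order as the roughly sevenfold population adjustment mentioned above.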
47.
A procedure is proposed for assessing the bioequivalence of variabilities between two formulations in bioavailability/bioequivalence studies. The procedure is essentially a two one-sided tests version of the Pitman-Morgan test, which is based on the correlation between crossover differences and subject totals. A nonparametric version of the proposed test is also discussed. A dataset of AUC values from a 2×2 crossover bioequivalence trial is presented to illustrate the proposed procedures.
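The underlying Pitman-Morgan idea can be sketched as follows: for paired observations, the covariance of the within-subject difference and the subject total equals the difference of the two variances, so equal variances correspond to zero correlation. The simulated log-AUC values below are purely illustrative, and the sketch shows only the basic two-sided correlation test, not the paper's two one-sided (equivalence) version.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    test = rng.normal(4.0, 0.30, 24)                            # hypothetical log-AUC, test formulation
    ref = 0.8 * (test - 4.0) + 4.0 + rng.normal(0, 0.15, 24)    # hypothetical log-AUC, reference formulation

    d, s = test - ref, test + ref       # crossover differences and subject totals
    # Cov(d, s) = Var(test) - Var(ref), so testing Corr(d, s) = 0 tests equality of variances
    r, p_two_sided = stats.pearsonr(d, s)
    print(r, p_two_sided)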
48.
Dale Hattis, Prerna Banati, Rob Goble, David E. Burmaster, Risk Analysis, 1999, 19(4): 711-726
This paper reviews existing data on the variability in parameters relevant for health risk analyses. We cover both exposure-related parameters and parameters related to individual susceptibility to toxicity. The toxicity/susceptibility database under construction is part of a longer-term research effort to lay the groundwork for quantitative distributional analyses of non-cancer toxic risks. These data are broken down into a variety of parameter types that encompass different portions of the pathway from external exposure to the production of biological responses. The discrete steps in this pathway, as we now conceive them, are:
1. Contact Rate (breathing rate per body weight; fish consumption per body weight)
2. Uptake or Absorption as a Fraction of Intake or Contact Rate
3. General Systemic Availability Net of First-Pass Elimination and Dilution via Distribution Volume (e.g., initial blood concentration per mg/kg of uptake)
4. Systemic Elimination (half-life or clearance)
5. Active Site Concentration per Systemic Blood or Plasma Concentration
6. Physiological Parameter Change per Active Site Concentration (expressed as the dose required to make a given percentage change in different people, or the dose required to achieve some proportion of an individual's maximum response to the drug or toxicant)
7. Functional Reserve Capacity, i.e. the change in a baseline physiological parameter needed to produce a biological response or pass a criterion of abnormal function
Comparison of the amounts of variability observed for the different parameter types suggests that appreciable variability is associated with the final step in the process: differences among people in functional reserve capacity. This implies that relevant information for estimating effective toxic susceptibility distributions may be gleaned from direct studies of the population distributions of key physiological parameters in people who are not exposed to the environmental and occupational toxicants that are thought to perturb those parameters. This is illustrated with some recent observations of the population distributions of Low Density Lipoprotein Cholesterol from the second and third National Health and Nutrition Examination Surveys.
49.
The von Bertalanffy growth model is extended to incorporate explanatory variables. The generalized model includes the switched growth model and the seasonal growth model as special cases, and can also be used to assess the effect of tagging on growth. Distribution-free and consistent estimating functions are constructed for estimating growth parameters from tag-recapture data in which age at release is unknown. This generalizes the work of James (1991, Biometrics 47, 1519–1530), who considered the classical model and allowed for individual variability in growth. A real dataset from barramundi (Lates calcarifer) is analysed to estimate the growth parameters and the possible effect of tagging on growth.
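For orientation, the classical von Bertalanffy curve and the Fabens increment form commonly used with tag-recapture data (where age at release is unknown) are

\[
L(t) = L_{\infty}\bigl(1 - e^{-K(t - t_{0})}\bigr), \qquad \Delta L = (L_{\infty} - l)\bigl(1 - e^{-K\,\Delta t}\bigr),
\]

where \(l\) is the length at release and \(\Delta t\) the time at liberty. In the generalized model described above, \(K\) and/or \(L_{\infty}\) would depend on explanatory variables such as season, a switch point, or tagging status; for instance \(K = K_{0}e^{x^{\top}\gamma}\) is one hypothetical parameterization, not necessarily the one used in the paper.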
50.
While academic researchers continue to debate the effect of board independence on performance, its efficacy could also be reflected in whether firm performance is made more stable. Board governance activities are a constellation of actions aimed at managing agency costs and ensuring the viability of a company over time. The efficacy of such actions would therefore be reflected in a distal outcome, specifically lower firm performance variability. Boards that can control agency costs and limit both underinvestment and overinvestment would reduce a firm's deviation from its mean performance trajectory. Using a longitudinal sample of publicly traded companies in the United States, we find that board stability, board resource provision, and CEO influence are negatively associated with performance variability. Board independence is not associated with performance variability. With increasing board independence, greater board stability and greater CEO influence are negatively associated with performance variability; however, greater board resource provision is not.