Search results 321–330 of 5,773.
321.
We study the problem of locating facilities on the nodes of a network to maximize the expected demand serviced. The edges of the input graph are subject to random failure due to a disruptive event. We consider a special type of failure correlation: the edge dependency model assumes that the failure of a more reliable edge implies the failure of all less reliable ones. Under this dependency model, called Linear Reliability Order (LRO), we give two polynomial-time exact algorithms. When two distinct LROs exist, we prove the total unimodularity of a linear programming formulation. In addition, we show that minimizing the sum of facility opening costs and the expected cost of unserviced demand under two orderings reduces to a matching problem. We prove NP-hardness of the three-orderings case and show that the problem with an arbitrary number of orderings generalizes the deterministic maximum coverage problem. When a demand point can be covered only if a facility exists within a distance limit, we show that the problem is NP-hard even for a single ordering.
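As an illustration of why the LRO assumption makes the problem tractable, the sketch below (not from the paper; a minimal Python example with made-up reliabilities, demands and a simplified coverage rule in which a node's demand is serviced whenever it can reach an open facility over surviving edges) couples all edge failures to a single uniform draw, so only |E|+1 survival scenarios have positive probability and the expected serviced demand can be evaluated exactly.

```python
from collections import deque

# Hypothetical 4-node network: node -> list of (neighbour, edge_id); edge ids shared by both endpoints
adj = {0: [(1, 0), (2, 1)], 1: [(0, 0), (3, 2)], 2: [(0, 1), (3, 3)], 3: [(1, 2), (2, 3)]}
reliability = {0: 0.9, 1: 0.7, 2: 0.5, 3: 0.8}    # survival probability of each edge
demand = {0: 10.0, 1: 5.0, 2: 8.0, 3: 3.0}
facilities = {0}                                   # candidate facility set to evaluate

def served_demand(surviving_edges):
    """Total demand at nodes that can reach an open facility through surviving edges."""
    reached, queue = set(facilities), deque(facilities)
    while queue:
        u = queue.popleft()
        for v, e in adj[u]:
            if e in surviving_edges and v not in reached:
                reached.add(v)
                queue.append(v)
    return sum(demand[v] for v in reached)

# Single LRO coupling: edge e survives iff U <= reliability[e] for one shared uniform U,
# so only |E| + 1 survival scenarios can occur; enumerate them exactly.
thresholds = sorted(set(reliability.values()))
expected, prev = 0.0, 0.0
for t in thresholds:
    # U in (prev, t]: exactly the edges with reliability >= t survive
    expected += (t - prev) * served_demand({e for e, r in reliability.items() if r >= t})
    prev = t
expected += (1.0 - prev) * served_demand(set())    # U above every reliability: all edges fail
print("expected serviced demand:", expected)
```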
322.
323.
Bráulio M. Veloso, Thais R. Correa, Marcos O. Prates, Gabriel F. Oliveira, Andréa I. Tavares 《Statistics and Computing》2017,27(4):1099-1110
Crime or disease surveillance commonly relies on space-time clustering methods to identify emerging patterns. The goal is to detect spatio-temporal clusters as soon as possible after their occurrence and to control the rate of false alarms. With this in mind, a spatio-temporal multiple-cluster detection method was developed as an extension of a previous proposal based on a spatial version of the Shiryaev–Roberts statistic. Besides the capability of multiple cluster detection, the method has fewer input parameters than the previous proposal, making its use more intuitive for practitioners. To evaluate the new methodology, a simulation study is performed in several scenarios, highlighting many advantages of the proposed method. Finally, we present a case study of a crime data set in Belo Horizonte, Brazil.
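For context, the sketch below shows the classical single-stream (purely temporal) Shiryaev–Roberts recursion that spatial and spatio-temporal versions build upon, here for Poisson counts with a rate increase; the rates, change point and alarm threshold are made-up values, and this is not the authors' multiple-cluster procedure.

```python
import numpy as np

rng = np.random.default_rng(42)
lam0, lam1, tau = 5.0, 8.0, 60          # pre-/post-change Poisson rates, true change point
counts = np.concatenate([rng.poisson(lam0, tau), rng.poisson(lam1, 40)])

# Shiryaev-Roberts recursion: R_n = (1 + R_{n-1}) * LR_n, raise an alarm when R_n >= threshold
threshold, R, alarm = 200.0, 0.0, None
for n, x in enumerate(counts, start=1):
    lr = np.exp(lam0 - lam1) * (lam1 / lam0) ** x    # Poisson likelihood ratio of one observation
    R = (1.0 + R) * lr
    if alarm is None and R >= threshold:
        alarm = n
print("alarm raised at observation", alarm)
```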
324.
Guillermo Julián-Moreno, Jorge E. López de Vergara, Iván González, Luis de Pedro, Javier Royuela-del-Val, Federico Simmross-Wattenberg 《Statistics and Computing》2017,27(5):1365-1382
α-Stable distributions are a family of probability distributions suitable for modelling many complex processes and phenomena in several research fields, such as medicine, physics, finance and networking. However, the lack of closed-form expressions makes their evaluation analytically intractable, and alternative approaches are computationally expensive. Existing numerical programs are not fast enough for certain applications and do not exploit the parallel power of general-purpose graphics processing units. In this paper, we develop novel parallel algorithms for the probability density function and cumulative distribution function (including a parallel Gauss–Kronrod quadrature), quantile function, random number generator and maximum likelihood estimation of α-stable distributions using OpenCL, achieving significant speedups and precision in all cases. Thanks to the use of OpenCL, we also evaluate the results of our library on different GPU architectures.
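The OpenCL library itself cannot be reproduced here; as a hedged CPU-side point of reference, SciPy's levy_stable distribution exposes the same quantities the paper accelerates on GPUs (density, distribution function, random variates), evaluated by numerical integration:

```python
import numpy as np
from scipy.stats import levy_stable

alpha, beta = 1.5, 0.0                      # stability and skewness parameters (illustrative values)
x = np.linspace(-5.0, 5.0, 11)

pdf = levy_stable.pdf(x, alpha, beta)       # density: no closed form in general, computed numerically
cdf = levy_stable.cdf(x, alpha, beta)       # distribution function
samples = levy_stable.rvs(alpha, beta, size=1_000, random_state=0)  # random variates

# Maximum likelihood estimation of (alpha, beta, loc, scale) is also available via
# levy_stable.fit(samples), but it is slow precisely because every likelihood
# evaluation requires numerical integration -- the bottleneck the paper targets.
```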
325.
A common objective of cohort studies and clinical trials is to assess time-varying longitudinal continuous biomarkers as correlates of the instantaneous hazard of a study endpoint. We consider the setting where the biomarkers are measured in a designed sub-sample (i.e., case-cohort or two-phase sampling design), as is normative for prevention trials. We address this problem via joint models, with underlying biomarker trajectories characterized by a random effects model and their relationship with instantaneous risk characterized by a Cox model. For estimation and inference we extend the conditional score method of Tsiatis and Davidian (Biometrika 88(2):447–458, 2001) to accommodate the two-phase biomarker sampling design using augmented inverse probability weighting with nonparametric kernel regression. We present theoretical properties of the proposed estimators and finite-sample properties derived through simulations, and illustrate the methods with application to the AIDS Clinical Trials Group 175 antiretroviral therapy trial. We discuss how the methods are useful for evaluating a Prentice surrogate endpoint, mediation, and for generating hypotheses about biological mechanisms of treatment efficacy.
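The conditional-score/AIPW estimator itself is too involved to reproduce here, but the following minimal simulation sketch (hypothetical parameter values, not from the trial) illustrates the data structure the abstract describes: random-intercept-and-slope biomarker trajectories, an event hazard driven by the current true biomarker value, and biomarker measurements observed only in a case-cohort subsample.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
visits = np.array([0.0, 0.5, 1.0, 1.5, 2.0])          # scheduled measurement times (years)

# Subject-specific true trajectories m_i(t) = b0_i + b1_i * t (random intercept and slope)
b = rng.multivariate_normal([2.0, -0.3], [[0.5, 0.05], [0.05, 0.1]], size=n)
true_marker = lambda i, t: b[i, 0] + b[i, 1] * t

# Event times from hazard lambda_i(t) = lambda0 * exp(gamma * m_i(t)),
# simulated on a fine time grid; administrative censoring at 3 years
lambda0, gamma, dt = 0.05, 0.8, 0.01
grid = np.arange(0.0, 3.0, dt)
event_time = np.full(n, 3.0)
for i in range(n):
    event = rng.random(grid.size) < lambda0 * np.exp(gamma * true_marker(i, grid)) * dt
    if event.any():
        event_time[i] = grid[event.argmax()]
status = (event_time < 3.0).astype(int)

# Two-phase (case-cohort) biomarker sampling: all cases plus a 20% random subcohort;
# observed values are the true trajectory plus measurement error, up to the event time
sampled = (status == 1) | (rng.random(n) < 0.2)
observed = {}
for i in np.flatnonzero(sampled):
    t_obs = visits[visits < event_time[i]]
    observed[i] = true_marker(i, t_obs) + rng.normal(0.0, 0.3, size=t_obs.size)
```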
326.
Elisa Perrone, Andreas Rappold, Werner G. Müller 《Statistical Methods and Applications》2017,26(3):403-418
Optimum experimental design theory has recently been extended to parameter estimation in copula models. The use of these models allows one to gain flexibility by splitting the model parameter set into marginal and dependence parameters. However, this separation also raises the natural issue of estimating only a subset of all model parameters. In this work, we treat this problem by applying D_s-optimality to copula models. First, we provide an extension of the corresponding equivalence theory. Then, we analyze a wide range of flexible copula models to highlight the usefulness of D_s-optimality in many possible scenarios. Finally, we discuss how the introduced design criterion also relates to the more general issues of copula selection and optimal design for model discrimination.
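As a hedged illustration of the criterion itself (using a simple quadratic regression rather than one of the paper's copula models), the D_s-criterion with nuisance parameters can be written as det(M)/det(M22), where M22 is the information block of the nuisance parameters; the sketch below evaluates it for two candidate designs.

```python
import numpy as np

def info_matrix(points, weights):
    """Normalized information matrix for the model E[y] = th0 + th1*x + th2*x^2."""
    x = np.asarray(points, dtype=float)
    F = np.column_stack([np.ones_like(x), x, x**2])
    return F.T @ (np.asarray(weights, dtype=float)[:, None] * F)

def ds_criterion(points, weights, s=2):
    """D_s-criterion det(M) / det(M22): the last s parameters are of interest,
    the leading block M22 belongs to the nuisance parameters (here the intercept)."""
    M = info_matrix(points, weights)
    k = M.shape[0] - s
    return np.linalg.det(M) / np.linalg.det(M[:k, :k])

# Two candidate designs on [-1, 1] when only (th1, th2) are of interest
print(ds_criterion([-1.0, 0.0, 1.0], [1/3, 1/3, 1/3]))
print(ds_criterion([-1.0, -0.3, 0.3, 1.0], [0.25, 0.25, 0.25, 0.25]))
```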
327.
Testing for bioequivalence of highly variable drugs from TR-RT crossover designs with heterogeneous residual variances
Traditional bioavailability studies assess average bioequivalence (ABE) between the test (T) and reference (R) products under the crossover design with TR and RT sequences. With highly variable (HV) drugs, whose intrasubject coefficient of variation in pharmacokinetic measures is 30% or greater, assertion of ABE becomes difficult due to the large sample sizes needed to achieve adequate power. In 2011, the FDA adopted a more relaxed, yet complex, ABE criterion and supplied a procedure to assess this criterion exclusively under TRR-RTR-RRT and TRTR-RTRT designs. However, designs with more than 2 periods are not always feasible. The present work investigates how to evaluate HV drugs under TR-RT designs. A mixed model with heterogeneous residual variances is used to fit data from TR-RT designs. Under the assumption of zero subject-by-formulation interaction, this basic model is comparable to the FDA-recommended model for TRR-RTR-RRT and TRTR-RTRT designs, suggesting the conceptual plausibility of our approach. To overcome the distributional dependency among summary statistics of model parameters, we develop statistical tests via the generalized pivotal quantity (GPQ). A real-world data example is given to illustrate the utility of the resulting procedures. Our simulation study identifies a GPQ-based testing procedure that evaluates HV drugs under practical TR-RT designs with a desirable type I error rate and reasonable power. In comparison to the FDA's approach, this GPQ-based procedure gives similar performance when the product's intersubject standard deviation is low (≤0.4) and is most useful when practical considerations restrict the crossover design to 2 periods.
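The paper's exact TR-RT procedure is not reproduced here, but the Monte Carlo construction behind a generalized pivotal quantity is easy to illustrate. The hedged sketch below builds GPQs for two normal means with heterogeneous variances and turns them into a generalized confidence interval for the T-R difference; all inputs are hypothetical summary statistics.

```python
import numpy as np

def gpq_mean_difference(xbar_t, s2_t, n_t, xbar_r, s2_r, n_r, n_mc=100_000, seed=0):
    """Monte Carlo generalized pivotal quantity for mu_T - mu_R when the two
    groups have heterogeneous (unequal) variances."""
    rng = np.random.default_rng(seed)
    # GPQ for each variance: (n - 1) * s^2 / chi2_{n-1}
    gpq_var_t = (n_t - 1) * s2_t / rng.chisquare(n_t - 1, n_mc)
    gpq_var_r = (n_r - 1) * s2_r / rng.chisquare(n_r - 1, n_mc)
    # GPQ for each mean: xbar - Z * sqrt(GPQ_variance / n)
    gpq_mu_t = xbar_t - rng.standard_normal(n_mc) * np.sqrt(gpq_var_t / n_t)
    gpq_mu_r = xbar_r - rng.standard_normal(n_mc) * np.sqrt(gpq_var_r / n_r)
    return gpq_mu_t - gpq_mu_r

# 90% generalized confidence interval for the T - R difference (hypothetical inputs)
diff = gpq_mean_difference(xbar_t=0.10, s2_t=0.09, n_t=24, xbar_r=0.00, s2_r=0.16, n_r=24)
lo, hi = np.percentile(diff, [5, 95])
print(lo, hi)
```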
328.
This paper is about satisficing behaviour. Rather tautologically, this is when decision-makers are satisfied with achieving some objective, rather than with obtaining the best outcome. The term was coined by Simon (Q J Econ 69:99–118, 1955) and has stimulated many discussions and theories. Prominent amongst these theories are models of incomplete preferences, models of behaviour under ambiguity, theories of rational inattention, and search theories. Most of these, however, seem to lack an answer to at least one of two key questions: when should the decision-maker (DM) satisfice, and how should the DM satisfice? In a sense, search models answer the latter question (in that the theory tells the DM when to stop searching) but not the former; moreover, the question as to whether any search at all is justified is usually left to a footnote. A recent paper by Manski (Theory Decis, doi:10.1007/s11238-017-9592-1, 2017) fills the gaps in the literature and answers both questions: when and how to satisfice. He achieves this by setting the decision problem in an ambiguous situation (so that probabilities do not exist, and many preference functionals therefore cannot be applied) and by using the Minimax Regret criterion as the preference functional. The results are simple and intuitive. This paper reports on an experimental test of his theory. The results show that some of his propositions (those relating to the ‘how’) appear to be empirically valid, while others (those relating to the ‘when’) are less so.
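For readers unfamiliar with the preference functional Manski uses, the minimax regret rule is easy to state: compute each action's regret against the best action in every state and choose the action whose worst-case regret is smallest. A minimal sketch with a made-up payoff matrix:

```python
import numpy as np

# Rows = actions, columns = states of the world (hypothetical payoffs)
payoffs = np.array([
    [10.0, 2.0, 4.0],   # e.g. keep searching
    [ 6.0, 6.0, 6.0],   # e.g. satisfice with the current option
    [ 8.0, 1.0, 7.0],
])
regret = payoffs.max(axis=0) - payoffs   # regret relative to the best action in each state
worst_regret = regret.max(axis=1)        # each action's worst-case regret
best_action = int(worst_regret.argmin()) # minimax-regret choice
print(best_action, worst_regret)
```

No probabilities over states are required, which is what makes the criterion usable under ambiguity.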
329.
Craig S. Webb 《Theory and Decision》2017,82(3):403-414
Choice under risk is modelled using a piecewise linear version of rank-dependent utility. This model can be considered a continuous version of NEO-expected utility (Chateauneuf et al., J Econ Theory 137:538–567, 2007). In a framework of objective probabilities, a preference foundation is given, without requiring a rich structure on the outcome set. The key axiom is called complementary additivity.
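A piecewise linear (NEO-additive) weighting function and the resulting rank-dependent evaluation can be sketched in a few lines; the parameter values and the square-root utility below are purely illustrative and are not taken from the paper.

```python
import numpy as np

def neo_weight(p, a, b):
    """Piecewise linear (NEO-additive) probability weighting:
    w(0) = 0, w(1) = 1, w(p) = b + a*p for 0 < p < 1 (with a >= 0, a + b <= 1)."""
    p = np.asarray(p, dtype=float)
    w = b + a * p
    w = np.where(p <= 0.0, 0.0, w)
    return np.where(p >= 1.0, 1.0, w)

def rdu(outcomes, probs, utility, a, b):
    """Rank-dependent utility with the piecewise linear weighting function."""
    order = np.argsort(outcomes)[::-1]                    # rank outcomes from best to worst
    x, p = np.asarray(outcomes, float)[order], np.asarray(probs, float)[order]
    w = neo_weight(np.cumsum(p), a, b)                    # weighted decumulative probabilities
    dec_weights = np.diff(np.concatenate(([0.0], w)))     # decision weights
    return float(np.sum(dec_weights * utility(x)))

# Lottery paying 100 with probability 0.3 and 0 otherwise, sqrt utility, hypothetical a and b
print(rdu([100.0, 0.0], [0.3, 0.7], utility=np.sqrt, a=0.6, b=0.1))
```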
330.
There are now three essentially separate literatures on the topics of multiple systems estimation, record linkage, and missing data. But in practice the three are intimately intertwined. For example, record linkage involving multiple data sources for human populations is often carried out with the expressed goal of developing a merged database for multiple system estimation (MSE). Similarly, one way to view both the record linkage and MSE problems is as ones involving the estimation of missing data. This presentation highlights the technical nature of these interrelationships and provides a preliminary effort at their integration.
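As a minimal illustration of the multiple systems estimation side of this triangle, the two-list (dual-system) estimator below takes the linked counts produced by record linkage as input; the counts are hypothetical, and any linkage error in the match count m feeds directly into the population estimate, which is exactly the kind of interdependence the abstract points to.

```python
# Two-list (dual-system / Lincoln-Petersen) multiple systems estimate from hypothetical post-linkage counts
n1 = 820                                           # records on list 1
n2 = 760                                           # records on list 2
m = 410                                            # records linked to both lists
n_hat = n1 * n2 / m                                # Lincoln-Petersen estimator
n_hat_chapman = (n1 + 1) * (n2 + 1) / (m + 1) - 1  # Chapman's bias-corrected version
print(round(n_hat), round(n_hat_chapman))
```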